DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Apertus: Open Source Cinema Project (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/)
-   -   High Definition with Elphel model 333 camera (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/63677-high-definition-elphel-model-333-camera.html)

Juan M. M. Fiebelkorn January 19th, 2007 07:09 AM

OK. Not trying to sound aggressive, but I feel we got lost in translation.
Most of the explanations I wrote don't seem to have come across at all. Please read my post above again.

Second try:

You said

"I'm not sure about that. The ITX boards have fairly limited CPU performance compared to desktop ones, and while pushing huge data in through the PCI bus gives you the chance to record it without involving the CPU, if you want a real-time viewfinder image then you must do a lot of computations on the fly. If you're going HD then you might find the ITX boards too weak. Have you got any test results so far?"

In reply to this

"we can send from the camera head two streams (if necessary): one is the full-resolution compressed Bayer, and the other could be a quarter-resolution, 8-bit-per-color-channel image which wouldn't need debayering and could also be lightly compressed so it can be sent to the viewfinder."

My question is: where are those x86 computations you mention applied?

The second stream (quarter resolution, 8 bits per color channel, slightly compressed) is supposed to come from the camera head and doesn't need demosaicing because it already has one color sample per pixel. The only computation, if any, would be decompressing it. You could even send a 4:1:1 or 4:2:0 stream instead. The conversion to RGB, as well as scaling, would be performed inside the GPU (Graphics Processing Unit).
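The color conversion Juan is pushing onto the GPU amounts to a standard YCbCr-to-RGB matrix plus chroma upsampling. A minimal CPU-side sketch of the same math in Python/NumPy, assuming BT.601 full-range coefficients and nearest-neighbour chroma replication (a real viewfinder would do this per-fragment in a GPU shader):

```python
import numpy as np

def yuv420_to_rgb(y, u, v):
    """Convert planar YCbCr 4:2:0 to RGB (BT.601, full range assumed).

    y is HxW; u and v are (H/2)x(W/2). Chroma is upsampled by simple
    pixel replication, which is cheap and good enough for a viewfinder.
    """
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    y = y.astype(np.float32)
    r = y + 1.402 * v                      # red from luma + Cr
    g = y - 0.344136 * u - 0.714136 * v    # green subtracts both chromas
    b = y + 1.772 * u                      # blue from luma + Cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# A neutral frame (U = V = 128 means zero chroma) converts to pure gray.
y = np.full((2, 2), 128, dtype=np.uint8)
uv = np.full((1, 1), 128, dtype=np.uint8)
print(yuv420_to_rgb(y, uv, uv)[0, 0])   # [128 128 128]
```

The per-pixel cost is a handful of multiply-adds, which is why even a weak x86 CPU or integrated GPU can keep up at viewfinder resolutions.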

(SI at the beginning said that they needed a GeForce FX just to display a crappy 1024x768 2D video.)

If a 1 GHz Nano-ITX can decode MPEG-2, why couldn't it decompress something far simpler?

(SI's camera, as I said before, needs a Pentium M because of the wavelet-based CineForm codec, which is really heavy, not because of any problem displaying the incoming video stream.)

On another note, 99.9% of people don't know how to design a PCB or code an FPGA (in VHDL or Verilog, whichever you like). I'm among them. I don't even know how to code in C, don't know how to perform bit-level operations, and can't even implement a Huffman table. What I did was just some internal testing to check image quality and compression ratio.

There are enough examples of things not working because of the lack of a suitable platform for coders to work with, and I fear that this one, at least within the video camcorder area, could become one of them.

Nano-ITX

http://www.via.com.tw/en/products/ma...erboard_id=361

http://www.digit-life.com/articles2/...ia-epia-n.html

Mini-ITX

http://www.via.com.tw/en/products/ma...erboard_id=450

Final Note: MS Windows is a no go for this system.

See you.

Zsolt Hegyi January 19th, 2007 11:19 AM

Quote:

"we can send from the camera head two streams (if necessary)"
Whoops, my bad. It seems I accidentally deleted this part when replying, so it got left out. Answering it: I'd be glad if we could send two streams from the camera head. After a quick review of the 333's FPGA architecture I came to think that we can't, but we need Andrey's answer on this because I'm still not sure what modifications would be needed to achieve it.

Quote:

Final Note: MS Windows is a no go for this system.
Agreed...


Zsolt

Wayne Morellini January 21st, 2007 12:14 PM

Quote:

Originally Posted by Juan M. M. Fiebelkorn
..
Ari Presler is a really smart guy and he has a sharp sense for business.
At the beginning I was heading for FPGA compression at the camera head level. Ari decided to go for the x86 software route, so he could get technical and commercial support from Intel and CineForm, and even have the option of Adobe Premiere installed on the camera so you can use it for editing, Ikegami style.

Hmm, exactly like what I was suggesting back at the start of the digital cinema camera projects. I was against the increased complexity of doing an FPGA and custom camera for the first cameras, and also suggested using the Elphel back then because it was already designed. Unfortunately, it turned out that modifying machine vision software for live recording was actually a relatively simple thing, according to people from machine vision companies (which now have software versions that do it) that I talked to in the last year or so. The only people who listened to me, and asked for my advice, went on to actually make commercial cameras.

I think you should all get together, in a spirit of open source and brotherly love, and make as many codec versions as you like, and make the camera work, of course?

Wayne Morellini January 21st, 2007 12:39 PM

I forgot: the Elphel really needs better sensors than the previous Microns; this is the single biggest problem. While we can jazz the color up, even normal CMOS HDV cameras are starting to produce good latitude and low noise. With a better sensor it could maintain an edge over HDMI capture solutions on cheap cameras; the problem is that this is just infeasible. The solution would be a much more expensive sensor board for indie people, using the best of the second-class solutions (even the special IBIS implementation as implemented on the Drake). In reality, I have only come across a few sensor technologies that lead for the price: FF and Kodak are the old ones, plus Foveon, and another that I posted here before, with 170 dB latitude and super-high sensitivity (the name slips me). Any others I can't remember.

I have talked with Andrey before about the possibility of using the Elphel unit as an encode/capture device for other cameras. I think, as far as Andrey is concerned, if you want to do it, then you are welcome to. Taken to its natural conclusion, if the camera were designed to accept not just component/HDMI but also USB/FireWire/GigE/machine-vision inputs, then it could replace a computer for acquisition from other box cameras.

Jason Rodriguez January 21st, 2007 05:13 PM

Juan is right . . . for "quadlet" based demosaic, you don't need a powerful graphics chipset . . . we've been able to do it on any of the modern Intel embedded chipsets such as the GMA900 and 950. Quadlet is REALLY easy since you're just either averaging or picking one of the green Bayer channels, and then combining that with an R and B channel to make an RGB pixel . . . the resolution is a quarter of the full HD version, but it's VERY fast to do.
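The "quadlet" scheme Jason describes can be sketched in a few lines. Assuming an RGGB cell layout (the actual sensor's layout may differ), each 2x2 Bayer cell collapses into one output pixel: average the two greens, take R and B directly, so there is no interpolation at all:

```python
import numpy as np

def quadlet_demosaic(bayer):
    """Collapse each 2x2 RGGB Bayer cell into one RGB pixel.

    No interpolation is done: the two greens of each cell are averaged
    and R/B are taken as-is, which is why this is cheap enough for weak
    integrated graphics or CPUs. Output is quarter resolution.
    """
    r  = bayer[0::2, 0::2].astype(np.float32)   # top-left of each cell
    g1 = bayer[0::2, 1::2].astype(np.float32)   # top-right green
    g2 = bayer[1::2, 0::2].astype(np.float32)   # bottom-left green
    b  = bayer[1::2, 1::2].astype(np.float32)   # bottom-right of each cell
    g = (g1 + g2) / 2.0                         # average the two greens
    return np.stack([r, g, b], axis=-1)

# One 2x2 cell: R=10, G=20, G=30, B=40 -> one pixel (10, 25, 40)
cell = np.array([[10, 20],
                 [30, 40]], dtype=np.uint8)
print(quadlet_demosaic(cell))   # [[[10. 25. 40.]]]
```

Picking one green instead of averaging saves the add and divide at the cost of slightly more green noise, which matches the "either averaging or picking" choice above.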

You'll have issues on an 855GME-based system because of the way it was constructed, with minimal memory bandwidth and the northbridge/southbridge interlinks, but the 915 and 945 chipsets will work just fine for ingest and display of HD quadlet data in real time.

With the Core 2 Duo, Mini-ITX boards can be VERY fast. And if you're not requiring CineForm compression or something equivalent, then using one of the LV versions of those chips (only 10-15 W max TDP) seems like a no-brainer solution, as it could be passively cooled (no fan required, just a really big heat sink) and easily battery powered.

Zsolt Hegyi January 22nd, 2007 08:28 AM

Quote:

even the special Ibis implementation as implemented on the Drake
What's with the Drake nowadays? Are they still in business? If they aren't, we might just ask them to provide us with the design of that sensor board and attach it to the Elphel...

Zsolt

Matteo Pozzi January 23rd, 2007 02:04 AM

For me the images of the Elphel 333 look better than the Drake's... but that's just my opinion.

Wayne Morellini January 23rd, 2007 03:27 AM

Well, I'm back. I have discovered a new CCTV sensor promising 2000 times more low-light sensitivity, and a color hi-res version:

http://www.marketwire.com/mw/release...ease_id=198786

Good news, there are more options now.

Wayne Morellini January 23rd, 2007 03:51 AM

Quote:

Originally Posted by Zsolt Hegyi
What's with the Drake nowadays? Are they still in business? If they aren't then we might just ask them to provide us with the design of that sensor board and attach it to the Elphel...

Zsolt

I don't know; Rai has never told me (just as nobody has ever mentioned what happened to the Sumix camera, which was a very good opportunity for them). The Drake was a very good opportunity, but unfortunately it would need to price against the Red/Silicon, which means higher resolution or lower price. I take it the Drake is gone. Whether you can pry the sensor board design from anybody's dead hands is another thing. But even though it had a decent well capacity and S/N (through the external circuit) and good latitude, it would be good to look at alternatives too. It would be interesting if Cirrus re-targeted the FF IBIS chip towards higher quality or lower price.

I am now of the mind that 50/60 fps and a pocket camera would also be useful for portable production, on the video side, alongside a full digital cinema type camera. Unless you can spend the time to do cinematic-style shooting (to reduce judder in moving objects), the smaller size and higher frame rate matter.

Wayne Morellini January 23rd, 2007 04:51 AM

More news on that sensor: it appears that it uses a quantum effect to move thousands of electrons for each photon. I mentioned other research that got two electrons moved by one photon a number of years ago; now it looks like quantum-efficiency ratings have just gone out the window.

http://www.pcmag.com/article2/0,1895,1908418,00.asp

Matteo Pozzi January 23rd, 2007 05:53 AM

I also saw this info two weeks ago, but I've found no images produced by this sensor and no spec info about it. Let's keep an eye on it.

Nikos Olsen January 23rd, 2007 06:06 AM

Wow, I guess I managed to read the whole thread! This project seems truly interesting.

What I was wondering is: when the 353 arrives, it will most likely have support for an HDD, and you could use something like a Nokia 770 to control it. But how are you going to monitor the video? Through the Nokia's screen? Has it got enough resolution to adequately pull focus and such? It is, after all, HD. Or would it be possible to add an additional monitor? Or would a UMPC with a high-res screen be the best solution?

Zsolt Hegyi January 23rd, 2007 07:14 AM

I always forget that with the new Micron sensor we can go up to 60 fps, which is around the minimum if we want to avoid the use of a mechanical shutter. I think fps is the first criterion to choose by, and only then can we take a look at other parameters. (And its resolution is also really good for the Elphel, because it's around the limit we might compress and record with the camera.)

Btw, about this new sensor they write:
Quote:

the sensor can capture images when light level is less than 1 lux (or up to 0.1 lux).
This could be read to mean that it won't work when the light level is higher than that threshold.

Zsolt

Nikos Olsen January 23rd, 2007 04:30 PM

1 Attachment(s)
I made a GUI concept designed for touchscreen use. I wanted the GUI to look simpler and less "frightening" than the current display, and also less industrial.

It's just a quick sketch, but any suggestions for improvement? Am I heading in the right direction?

Oscar Spierenburg January 24th, 2007 07:25 AM

Nikos, do you mean to design this for the Elphel? I own the 333 and I'm really looking forward to a simpler setup/control panel dedicated to filmmaking.
Maybe something that also implements audio recording.



DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network