DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Apertus: Open Source Cinema Project (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/)
-   -   High Definition with Elphel model 333 camera (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/63677-high-definition-elphel-model-333-camera.html)

Charles Hurley February 16th, 2007 01:12 AM

If you guys are still looking for a viewfinder solution you might think about going old-school.

http://www.imagehosting.com/out.php/...1_Picture4.png

http://www.imagehosting.com/out.php/...0_Picture3.png

I've loosely followed this thread and enjoyed it. Take Care, Chuck.

Matteo Pozzi February 21st, 2007 01:02 PM

Very interesting lenses, but they are very expensive and you lose a lot of light with them - and for this type of adapter we need as much light as we can get!
If you use one directly on the camera, the sensor being much smaller than a 16 mm film frame means a wide lens effectively becomes a telephoto.
So it depends on what you want to do!
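
As a rough illustration of that crop (the sensor width used here is an assumed value for a 1/2"-type sensor, not a confirmed spec for the camera), the focal-length multiplier versus a standard 16 mm gate works out like this:

Code:

# Rough crop-factor estimate versus a standard 16 mm film gate.
# The sensor width below is an assumed value for a 1/2"-type sensor,
# not a confirmed specification of the camera.
FILM_16MM_GATE_WIDTH_MM = 10.26   # standard 16 mm camera aperture width
SENSOR_WIDTH_MM = 6.4             # assumed active sensor width

crop_factor = FILM_16MM_GATE_WIDTH_MM / SENSOR_WIDTH_MM
print(f"crop factor ~ {crop_factor:.1f}x")
# ~1.6x: a lens that is "wide" on 16 mm film frames like a moderate tele here.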

Ron Lemming February 22nd, 2007 07:06 AM

So, if this is a network camera, will it work with wireless ethernet?

Andrey Filippov February 22nd, 2007 11:14 AM

WiFi connection to the camera
 
Quote:

Originally Posted by Ron Lemming
So, if this is a network camera, will it work with wireless ethernet?

http://www.google.com/search?q=wl330g. It is ~4W and can work as a client, not only as an AP. Unfortunately it does not have an external antenna connector, so I had to add one. And a small hack in the camera: it should be activated from the wired connection (like a simple ping to its IP) before it will start responding to the wireless.
With dish antennas I have connected over some 3 km.
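
A minimal sketch of that wake-up sequence (the address is hypothetical, and it assumes the ping is sent from a host on the wired segment before any wireless client connects):

Code:

import subprocess
import urllib.request

CAMERA_IP = "192.168.0.9"  # hypothetical camera address

# One ping over the wired connection "activates" the camera, per the hack above.
subprocess.run(["ping", "-c", "1", CAMERA_IP], check=True)

# After that, requests arriving through the WL-330g wireless bridge are answered.
with urllib.request.urlopen(f"http://{CAMERA_IP}/", timeout=5) as resp:
    print(resp.status)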

Andrey

Rob Scott March 1st, 2007 04:40 PM

Andrey,

I've been thinking about the best way to use a 333/353 camera for filmmaking with an IDE hard drive. Storing the raw data -- even compressed -- seems like a long shot.

A while back, Wayne mentioned the possibility of using 16-bit grayscale JPEG to compress each of the Bayer "channels" separately, thus preserving the high bit depth while (hopefully) introducing very little noise.

What do you think of that idea? Would it be possible to adapt the existing JPEG FPGA code to support 16-bit grayscale?

Thanks,
Rob

Phil Stone March 2nd, 2007 03:48 AM

Quote:

Originally Posted by Ron Lemming (Post 629889)
So, if this is a network camera, will it work with wireless ethernet?

I had the older 313 working over wireless to XP and the VLC player/recorder, but streaming video is limited by the lower bandwidth.

Wayne Morellini March 2nd, 2007 09:26 AM

More compression ideas; HDMI storage (re-typed after a crash).
 
Argh - Opera crashed during my post, so this is a re-type.

What are people currently working on? Maybe we should wait. But for now, why not test the quality of saving raw Bayer frames as grayscale JPEGs - no modification needed. Scaling the red and blue pixels to equivalent green values before the grayscale compression might improve quality a lot, and they can be restored to Bayer values in post-production. These are things we can do simply, right now.
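
A quick way to try that offline, sketched with NumPy and Pillow (the RGGB layout and the gain values are assumptions - real gains would come from the scene's white balance, and they have to be recorded so the scaling can be undone before demosaicing):

Code:

import numpy as np
from PIL import Image

def bayer_to_gray_jpeg(bayer, path, r_gain=1.6, b_gain=1.4, quality=95):
    """Save a raw Bayer frame (RGGB layout assumed) as an 8-bit grayscale JPEG,
    after scaling red and blue sites toward green so the mosaic compresses
    more evenly. Gains here are illustrative placeholders."""
    img = bayer.astype(np.float32)
    img[0::2, 0::2] *= r_gain   # R sites
    img[1::2, 1::2] *= b_gain   # B sites
    img = np.clip(img, 0, 255).astype(np.uint8)
    Image.fromarray(img, mode="L").save(path, quality=quality)
    return r_gain, b_gain       # keep these to restore true Bayer values later

def gray_jpeg_to_bayer(path, r_gain, b_gain):
    """Undo the scaling so the result can be demosaiced as ordinary Bayer data."""
    img = np.asarray(Image.open(path), dtype=np.float32)
    img[0::2, 0::2] /= r_gain
    img[1::2, 1::2] /= b_gain
    return img

# Example: bayer_to_gray_jpeg(frame, "frame0001.jpg")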

Scott, that is a good idea in the longer run (though I can't remember exactly which one it was). Was it: take the three separate color channels, do all the elimination processing that JPEG does on all the values, compress the green channel first, then do difference compression of the remaining color channels against the green (preferably scaling them to green first, or even comparing them to the average of the surrounding green pixels - a simple interpolation)? There are already people doing projects - what are they? Maybe we should wait and do the simple grayscale experiments in the meantime. Looked at that way, you could treat it like a 4:4:4 image: a full-resolution interpolated green frame could be produced, then red and blue interpolated frames compressed as differences against it. The interpolation is already done to get from Bayer to JPEG anyway; this is just an extra step.
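
Roughly what that green-difference idea looks like on the data, assuming an RGGB mosaic and a crude four-neighbour interpolation (the in-camera version would live in the FPGA; this only shows where the residuals come from):

Code:

import numpy as np

def green_difference_planes(bayer):
    """Split an RGGB Bayer frame into a green plane plus red/blue residuals
    against interpolated green. The green plane would be JPEG-compressed as
    usual; the (small) residuals compress much better than raw R and B."""
    b = bayer.astype(np.float32)

    # Interpolated green at every pixel: at R and B sites use the mean of the
    # four orthogonal neighbours (all of which are green in an RGGB mosaic).
    pad = np.pad(b, 1, mode="edge")
    neigh = 0.25 * (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:])
    green = b.copy()
    green[0::2, 0::2] = neigh[0::2, 0::2]   # R sites -> interpolated green
    green[1::2, 1::2] = neigh[1::2, 1::2]   # B sites -> interpolated green

    # Residuals of the chroma samples against the green estimate.
    r_resid = b[0::2, 0::2] - green[0::2, 0::2]
    b_resid = b[1::2, 1::2] - green[1::2, 1::2]
    return green, r_resid, b_resid   # compress green normally, residuals as differences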

We should not get tied up on intra versus inter: simple inter-frame compression can give us huge gains in quality, as it fits more into the current bandwidth, which means more quality. Though what will really make a codec shine at small bandwidths is a sensor with a high signal-to-noise ratio, to eliminate noise and keep dark scenes clean. Using a memory buffer to store compressed frames and smooth out the data rate would allow bigger, higher-quality GOP key frames, and more space for the GOP at the start of scene changes, etc. I typed this up before, but cannot remember it all now. The inter-frame compression could be done between adjacent JPEG-preprocessed images held in the memory buffer, to maintain even image quality. What is saved is space and the final steps of JPEG processing.

I have approached Andrey before about the possibility of using the main camera controller as a component-video and HDMI compressor/storage unit; with IDE we finally have the bandwidth to do this well. I have also been in contact with tzerotech (and Analog Devices) in the past about their UWB wireless wavelet HDMI technology as a way to record HDMI to a computer, or through a direct USB version. Such a device could be very cheap and hook directly to your laptop or Ultra Mobile PC - a no-brainer - saving directly to a portable hard disk enclosure. From memory, 100-200 Mb/s 4:2:2 wavelet should be expected (faster is possible). Most of the work would already have been done; it is just a matter of interfacing USB to a reference design and arranging the driver, if that has not been done already. The price could beat the pants off a PCIe-based portable computer system, and it could be attached to a camera.

Unfortunately, engineers tend to be more interested in their latest glamorous projects than in these simpler, faster solutions.


Thanks

Wayne.

Rob Scott March 2nd, 2007 10:01 AM

Quote:

Originally Posted by Wayne Morellini (Post 634656)
Was it: take the three separate color channels, do all the elimination processing that JPEG does on all the values, compress the green channel first, then do difference compression of the remaining color channels against the green (preferably scaling them to green first, or even comparing them to the average of the surrounding green pixels - a simple interpolation)? There are already people doing projects - what are they? Maybe we should wait and do the simple grayscale experiments in the meantime.

From my own experience, the two green channels should not be combined; you would separate the four channels -- R, B, G1, G2 -- and compress each one as a separate 16-bit-deep grayscale image. No interpolation would be done at this stage; the resulting 4 mini-JPEGs would be written to disk.

Offline, you would decompress and recombine the 4 channels, resulting in a true raw image with (hopefully) just a tiny bit of noise/distortion from the JPEG compression. Then, finally, the Bayer interpolation would be done in order to end up with a high-bit-depth file such as 48-bit TIFF or OpenEXR.

Obviously, this would only work if it was possible to adapt the JPEG FPGA code to support 16-bit grayscale.
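
For anyone who wants to measure how much the JPEG step would actually degrade the data, here is a rough offline mock-up of the split and recombine steps (NumPy, RGGB layout assumed; the camera-side version would of course be FPGA code, and 16-bit grayscale JPEG support is exactly the open question - this only shows the channel bookkeeping):

Code:

import numpy as np

def split_bayer(bayer):
    """Separate an RGGB Bayer frame into four quarter-size planes.
    Each plane would be handed to the compressor as an independent
    grayscale image; no interpolation happens here."""
    return {
        "R":  bayer[0::2, 0::2],
        "G1": bayer[0::2, 1::2],
        "G2": bayer[1::2, 0::2],
        "B":  bayer[1::2, 1::2],
    }

def recombine_bayer(planes):
    """Offline step: interleave the four decompressed planes back into a
    single raw mosaic, ready for normal Bayer demosaicing afterwards."""
    h, w = planes["R"].shape
    bayer = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    bayer[0::2, 0::2] = planes["R"]
    bayer[0::2, 1::2] = planes["G1"]
    bayer[1::2, 0::2] = planes["G2"]
    bayer[1::2, 1::2] = planes["B"]
    return bayer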

Wayne Morellini March 2nd, 2007 11:19 AM

I suspected that might be the case. I think the two greens could be combined, but it might be tricky to get right mathematically; I suspect some combination where the red and blue are matched to alternate green channels. The rest of my suggestions are only meant to improve performance over the scheme you mentioned, and do not need to be done that way.

Wayne Morellini March 2nd, 2007 11:29 AM

Sorry, posted before I finished.

The JPEG compression routine might not act the same way across all four images; I think that a single normalised grayscale image might do better. If we went the extra step of normalising the image, the existing interpolation section in the camera might handle it, by telling it to interpolate to green in a fashion that just scales the red and blue values.

Andrey has mentioned compressing the Bayer image as a grayscale JPEG before, so I think it would be supported. Did you get a camera?

What do you think, Scott?

Andrey Filippov March 2nd, 2007 01:19 PM

We compressed color images as monochrome (with the Bayer processing done later) in our model 323 cameras (http://elphel.cvs.sourceforge.net/el...v?view=markup). Actually, we rearranged the pixels in each 16x16 macroblock to reduce the high-frequency components caused by the color pattern - that made compression more efficient.
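
If I read that right, the rearrangement amounts to gathering same-colour pixels into their own 8x8 quarter of each 16x16 macroblock, so every 8x8 DCT block sees a smooth single-colour patch instead of the alternating mosaic (the exact ordering in the 323 firmware may differ; this is only an illustration):

Code:

import numpy as np

def rearrange_macroblock(mb):
    """Regroup one 16x16 RGGB macroblock so each colour occupies its own
    8x8 quadrant (illustrative ordering; the actual firmware layout may differ).
    The JPEG DCT then works on smooth single-colour blocks, with far less
    high-frequency energy than the raw alternating mosaic."""
    out = np.empty_like(mb)
    out[0:8,  0:8]  = mb[0::2, 0::2]   # R
    out[0:8,  8:16] = mb[0::2, 1::2]   # G1
    out[8:16, 0:8]  = mb[1::2, 0::2]   # G2
    out[8:16, 8:16] = mb[1::2, 1::2]   # B
    return out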

As for more bits per pixel - I do not see any need for it with current sensors. The Micron 5 MPix sensor has about 8 ke- of pixel full-well capacity, so even with its 12-bit output the number of levels that can actually be distinguished against the shot noise is far fewer than 4096. So I believe a "gamma" table (actually, a table optimized for noise performance) can compress the 12-bit range into 256 levels without sacrificing sensor data.
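
The arithmetic behind that, as I understand it: shot noise at a signal of N electrons is about sqrt(N), and counting the range in steps of one noise sigma gives roughly 2*sqrt(FWC) distinguishable levels:

Code:

import math

FULL_WELL_E = 8000  # electrons, figure quoted above for the Micron 5 MPix sensor

# Stepping through the range in increments of one shot-noise sigma:
# integral of dN / sqrt(N) from 0 to FWC = 2 * sqrt(FWC) distinguishable levels.
levels = 2 * math.sqrt(FULL_WELL_E)
bits = math.log2(levels)
print(f"~{levels:.0f} distinguishable levels  (~{bits:.1f} bits)")
# ~179 levels, ~7.5 bits -- comfortably inside 256 output codes, which is why a
# square-root-like "gamma" table can fold 12-bit data into 8 bits without loss.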

Matteo Pozzi March 2nd, 2007 05:48 PM

I think that a good-quality MJPEG file is more than enough... if JPEG is the standard in digital still cameras, why would we need more than good JPEG compression for video, where a picture remains on screen for only 1/24 of a second? I prefer simplicity over big files!

Andrey Filippov March 2nd, 2007 08:38 PM

I made a simple JavaScript program that calculates the number of bits needed to represent image pixels without losing sensor data:
http://www.elphel.com/actualbits.html

Rob Scott March 3rd, 2007 07:32 AM

Quote:

Originally Posted by Matteo Pozzi (Post 634940)
I think that a good-quality MJPEG file is more than enough... if JPEG is the standard in digital still cameras, why would we need more than good JPEG compression for video, where a picture remains on screen for only 1/24 of a second? I prefer simplicity over big files!

For simplicity, you'd be better off with a regular DV or HDV camera! :-)

But seriously, there are reasons to press for (nearly) raw images with as few compression artifacts as possible -- for example: video effects work, where reduced color space and noise will interfere with the quality of a chroma key mask.

Rob Scott March 3rd, 2007 07:36 AM

Quote:

Originally Posted by Andrey Filippov (Post 635010)
I made a simple JavaScript program that calculates the number of bits needed to represent image pixels without losing sensor data

Thanks for putting that together, Andrey, it's very instructive.

Question: How does pixel binning affect this? For example, you can configure the Micron 5MP sensor for 2x2 binning, resulting in a 1.25 MP image with less noise. How many effective bits of resolution would this image have?
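
Continuing the same shot-noise arithmetic for 2x2 binning, under the assumption that the four pixel charges are simply summed so the effective full well quadruples (the sensor's actual binning mode may average instead, which would change the numbers):

Code:

import math

FULL_WELL_E = 8000                     # per-pixel full well quoted earlier
BINNED_FULL_WELL_E = 4 * FULL_WELL_E   # assuming charges from a 2x2 block are summed

levels = 2 * math.sqrt(BINNED_FULL_WELL_E)
print(f"~{levels:.0f} levels, ~{math.log2(levels):.1f} bits")
# ~358 levels, ~8.5 bits -- binning roughly doubles the distinguishable levels,
# i.e. adds about one extra bit of useful depth.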

