DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Apertus: Open Source Cinema Project (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/)
-   -   High Definition with Elphel model 333 camera (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/63677-high-definition-elphel-model-333-camera.html)

Rob Scott October 26th, 2006 11:35 AM

Quote:

Originally Posted by Zsolt Hegyi
You can develop the disk recording part with theora while I'm working on my encoder.

Good idea -- and perhaps we should also record raw frames, to see how many fps it can handle without any compression at all.
Quote:

Follow focus would be nice but if there's no other way then we must do it like you've said.
Given that the CPU and FPGA will be quite a bit more powerful, it may be possible to do both. I guess we'll have to see how close to the edge we are.

Wayne Morellini October 28th, 2006 08:20 AM

Good ideas guys.


Andrey, I know that you are somewhat into sensor design, I am not, but I had a few ideas today, and some older ones.

I was thinking that the sampling mechanism between four or more Bayer sensor pads could be shared, reducing fill factor loss. Using a multiplexer, the inputs of different pixels could be sampled sequentially or combined in different ways, producing a new value for binning and even debayering (4 Mp binned 2x2 to either red, green or blue for 720p Bayer at that position). Because of the speed of the internal sampling, the delay between pixels should be minimal. A larger number of pixels could share circuitry, depending on the noise induced by trace length, to further reduce fill factor loss, and a large amount of data could be fed out in parallel because of the serial digital nature of the data. If you used a serial-like A/D conversion, the fill factor loss could be reduced to a few percent.

I then turned to the thought of storing charge, and realised that the chip manufacturing process is compatible with extra vapour layering (or magnetic-level layering). As for overcharge effects, the sampling method gives some resistance of course, and continual sampling and dumping/discharging of overcharge would give further resistance, but an overcharge sink could be made to earth it (I have a unique process to do this, to handle large amounts of overcharge, but it has greater potential and I have it in mind for other things). The circuits can be protected and the pads covered with layers promoting light gathering and capacitance. This allows for a great deal of capacitance.
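As a rough software model of the binning/debayering idea above (purely illustrative - the function name and the RGGB cell layout are my assumptions, not anything from an actual sensor design):

```python
import numpy as np

def bin_bayer_2x2(raw):
    """Combine each 2x2 RGGB cell of a Bayer mosaic into one RGB value,
    a software sketch of the shared-sampling/binning idea above.
    Assumes `raw` is a 2D array in RGGB order with even dimensions."""
    r  = raw[0::2, 0::2].astype(np.float64)   # top-left of each cell
    g1 = raw[0::2, 1::2].astype(np.float64)   # top-right
    g2 = raw[1::2, 0::2].astype(np.float64)   # bottom-left
    b  = raw[1::2, 1::2].astype(np.float64)   # bottom-right
    g = (g1 + g2) / 2.0                       # average the two greens
    return np.stack([r, g, b], axis=-1)       # half-resolution RGB image

# Toy 4x4 mosaic; a 2592x1944 frame would bin down to 1296x972 the same way.
mosaic = np.arange(16).reshape(4, 4)
print(bin_bayer_2x2(mosaic).shape)  # (2, 2, 3)
```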

I realise that some of this sounds suspiciously like a CCD and existing CMOS methods, but I think it is an alternative method that could easily be added to a CMOS assembly line without substantially changing existing components, keeping the line open for production runs of other normal chip items.

What do you think of this process? Any comments?


Thanks

Wayne.

Andrey Filippov October 29th, 2006 10:03 AM

Quote:

Originally Posted by Wayne Morellini
Good ideas guys.

Andrey, I know that you are somewhat into sensor design, I am not, but I had a few ideas today, and some older ones.

Wayne, I'm sorry - I never worked on sensor design myself, only used sensors. So I cannot provide you with an educated opinion on your ideas.

Wayne Morellini October 29th, 2006 10:20 AM

That's O.K.

I saw your paper on some sensor function some time ago, and thought you might be into the sensor hardware side.


Thanks

Wayne.

Régine Weinberg October 29th, 2006 01:55 PM

Dear Wayne
 
Do you have a clue about the Playstation 3 and the Linux disk?
In France maybe, maybe middle of next year????

Wayne Morellini October 31st, 2006 04:52 AM

I have seen a news article about it. Somebody is preparing a Fedora-based distribution. I suspect we will see a number of them. But before it is going to be of use to us, they really are going to have to have a method for normal Linux programs and codecs to automatically take advantage of the extra parallel processing power, or the developers are going to have to customise their codecs and other software. Which means somebody will have to ask them, or write it themselves.

http://news.spong.com/article/10969?cb=94


Ronald, I tried to email you a little while ago to ask for some advice. Have you updated your mail redirection in your profile?

Zsolt Hegyi October 31st, 2006 12:29 PM

The new Micron sensor is only 1/2.5" in size. I think without a ground glass we won't have adequate DOF. The 2/3" size is around the minimum we could use. Their previous 1/2" sensor might have been worth a try (with different resolution and fps, of course) but this one has just become too small. Moreover, we lose 25% of the vertical resolution if we need a 16:9 aspect ratio, so in the end we get the size of a 1/3" sensor or even worse.
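A quick sanity check of that 25% figure (a minimal sketch; the only inputs are the two aspect ratios):

```python
# Cropping a 4:3 sensor to 16:9 at full width: how much height survives?
full_aspect = 4 / 3
crop_aspect = 16 / 9
height_kept = full_aspect / crop_aspect       # (9/16) / (3/4) = 0.75
print(f"{1 - height_kept:.0%} of vertical resolution lost")  # 25%
```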

Any suggestions for a good GG? I've seen there are a lot of them on this forum - I just can't choose... All I know is that I don't want any DIY stuff; I'll have more than enough work with the camera itself.

The maximum aperture we can use with the C-mount lens without causing problems with the sensor's microlenses is also a good question. The GG loss and this aperture will add together to give the total loss in f-stops (probably around f/1.5-2 if the glass loses half a stop and we have an f/1.2 lens). I suspect we'll almost always have to use the maximum shutter width the sensor is capable of, i.e. something around 1/24 (this includes the readout time). This will cause motion blur as a side effect.
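One way to check the stop arithmetic (a hedged sketch; the half-stop GG loss and the f/1.2 lens are just the numbers from above):

```python
import math

def effective_f_number(f_number, stops_lost):
    """Effective f-number after losing `stops_lost` stops of light.
    Each full stop halves the light, so the f-number scales by sqrt(2)
    per stop lost."""
    return f_number * math.sqrt(2) ** stops_lost

# An f/1.2 lens behind a ground glass that eats half a stop:
print(round(effective_f_number(1.2, 0.5), 2))  # ~1.43, i.e. roughly f/1.5
```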

Luckily, the readout time can be really short with the new sensor. At 96 MHz it runs at 12 fps at maximum resolution (as opposed to the previous sensor's 48 MHz maximum). We'll use it in 2x2 binning mode, so 60 fps can be reached at the planned 1280x720 resolution - this is especially good for overcranking. I suppose this readout speed no longer causes rolling shutter artifacts, so that's good news.
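A back-of-the-envelope ceiling on those frame rates (a sketch only: it ignores blanking and readout overhead, which is why the real 12 fps and 60 fps figures sit below these numbers; the 2592x1944 full resolution is my assumption for a typical 5 Mpix Micron part, not a datasheet value):

```python
def max_fps(pixel_clock_hz, width, height, overhead=1.0):
    """Upper-bound frame rate: pixel clock divided by pixels per frame.
    `overhead` > 1.0 can model per-frame blanking; with the default 1.0
    this is an ideal ceiling, not an achievable figure."""
    return pixel_clock_hz / (width * height * overhead)

clock = 96e6
print(round(max_fps(clock, 2592, 1944), 1))  # ~19 fps ceiling at full 5 Mpix
print(round(max_fps(clock, 1280, 720), 1))   # ~104 fps ceiling after 2x2 binning
```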

Andrey, what are your experiences with the new sensor? Could we do it as described above, or do you see some problems? Also, one of my earlier questions was left unanswered: can we create two simultaneous video streams in the FPGA (one for disk recording and the other for network preview)?

Thank you,
Zsolt

Rob Scott October 31st, 2006 01:00 PM

Quote:

Originally Posted by Zsolt Hegyi
Any suggestions for a good GG?

I don't have experience with any of them, but I've noticed that none of them have explicit support for a C-mount camera; they all assume a camcorder with lens built in. I e-mailed one of the guys about this and they said to use a C-to-Nikon adapter, then use a macro lens and a 10-inch (!) extension in front of the GG adapter. I know, 10" is awfully long -- I don't know if that can be shortened.
Quote:

The readout time can be really short with the new sensor. At 96 MHz it runs at 12 fps at maximum resolution (as opposed to the previous sensor's 48 MHz maximum). We'll use it in 2x2 binning mode, so 60 fps can be reached at the planned 1280x720 resolution
That was my thinking too. Sounds good!

Zsolt Hegyi October 31st, 2006 01:29 PM

Quote:

I've noticed that none of them have explicit support for a C-mount camera
I was thinking of putting a C-mount lens on the camera and the GG after that. I don't know which solution is better; perhaps it depends on the quality of the lenses.

Quote:

and a 10-inch (!) extension in front of the GG adapter
It isn't a problem if we need those classic 15mm metal bars anyway to mount the whole thing.

Zsolt

Wayne Morellini November 1st, 2006 07:44 AM

Quote:

Originally Posted by Zsolt Hegyi
Any suggestions for a good GG? I've seen there are a lot of them on this forum - I just can't choose... All I know is that I don't want any DIY stuff, I'll have more than enough work with the camera itself


Most of the static projects struggle to get HD resolution, so look carefully through the threads for HD-capable adaptors. Even with the moving ones, I don't know what the quality will be.


Thanks

Wayne.

Zsolt Hegyi November 9th, 2006 01:18 PM

I've been browsing through the M-Audio/E-MU catalogs and it seems they don't write Linux drivers. So we won't be able to connect them to the camera. I think the audio will have to be recorded with the PC instead (we need one anyway for the camera control/viewfinder functions).

Zsolt

Wayne Morellini November 10th, 2006 09:37 AM

Eliminating the need for a PC would be quite good.

E-MU and M-Audio are not the only ones; Terratec and a number of others exist. Try looking on the www.via.com.at website under their Envy24 audio processors for manufacturing partners and products based on them; those manufacturers also produce items like these. www.digit-life.com may also still have reviews of such products (their reviews tend to be quite good), but the companies mentioned are worth a look.

If you look through the Linux stuff you might also find drivers and APIs for USB sound cards, even for specific models like the Audigy series (now much improved). I think I remember something about Audigy and Linux.


You don't really need a computer; most of the work is now in camera. You could use a PDA, a microcontroller board, a Sony PSP, or something like the http://www.gamepark.com/ pocket game machines (some run Linux). The PSP is the best, though not strictly open; it has a reprogrammable processing pipeline used for sound and 3D, a bit like an FPGA, and has good processing abilities. It can provide controls through its own buttons and display, and the camera could provide a live video feed port if made that way.

Andrey Filippov November 11th, 2006 10:02 PM

Update on 353
 
Extra work was needed to make the board boot from the NAND flash - that software is not yet available from Axis. But that seems to be done, and the camera now boots Linux from its flash memory. Next week I'll be working on trying the FPGA.

Régine Weinberg November 15th, 2006 10:25 AM

good luck
 
That sounds great - maybe a dream comes true.
A stupid question anyway: for 10 minutes of footage, what would the estimated disk space and sustained data rate be?

Serge Victorovich November 16th, 2006 09:32 AM

Wavelet-Based RAW Compression
 
Using the KWEII wavelet-based compressed RAW format for in-camera storage has the following advantages over JPEG:

1. Wavelet compression takes less time than JPEG compression, so compressing the RAW data speeds up the capture process.
2. If two same-size compressed files are compared, the compressed RAW file has better quality than the JPEG (or, for the same quality, the compressed RAW file is smaller).
3. Preparing a small version of an image for display on a phone screen, for a quick check or for sending over a radio channel, becomes easier and does not require extra storage space. A wavelet-compressed file has a special structure: it contains smaller versions of the image as part of itself. For example, if the camera provides a 1.2 Mpix image, only a part of the compressed file needs to be decompressed and processed to get a VGA-size image. Even smaller parts are used for 1/4 VGA and 1/16 VGA images.
Two processes should be clearly distinguished:
* Image processing for data storage;
* Image processing for display on a cell phone screen.
These processes are independent, and the second does not require the first to have been completed.
4. Because of the specifics of wavelets, the noise level is automatically reduced upon resizing.
5. For data storage purposes, an image can be processed by the phone CPU in the background (at lower priority) over a longer period, and as a result better image quality is achieved.


The text above is about phones :) But the same type of compression is used in RED, SI2K...
All correction of the recorded RAW happens in post! If you really want a cheap but good HD cam, choose the right compression: wavelet-based RAW!
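To illustrate point 3 above - the smaller versions embedded inside a wavelet-compressed file - here is a minimal Haar wavelet sketch (Haar is the simplest wavelet; real codecs use better filters, and the function name is mine):

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar wavelet transform.
    Returns the half-size approximation plus three detail bands."""
    a = img.astype(np.float64)
    # average/difference along rows, then along columns
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_r[0::2] + lo_r[1::2]) / 2   # approximation: half-size image
    lh = (lo_r[0::2] - lo_r[1::2]) / 2
    hl = (hi_r[0::2] + hi_r[1::2]) / 2
    hh = (hi_r[0::2] - hi_r[1::2]) / 2
    return ll, (lh, hl, hh)

# Two levels: decoding only the approximations yields 1/4-area and
# 1/16-area previews without touching the detail bands at all.
img = np.random.rand(16, 16)
ll1, _ = haar_level(img)     # 8x8 preview
ll2, _ = haar_level(ll1)     # 4x4 preview
print(ll1.shape, ll2.shape)  # (8, 8) (4, 4)
```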


All times are GMT -6.

DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network