SparkFun Forums 

By follower
#113119

Camera modules are notorious for having poor data sheets and while this one is perhaps better than most it's quite dense.

You might want to look at a few of the links (at the end of the linked section) about how to read datasheets on my site: How to read datasheets

--Philip;
By thebecwar
#113121
follower wrote:Camera modules are notorious for having poor data sheets and while this one is perhaps better than most it's quite dense.
--Philip;
I'd extend that by saying any component not designed for use in the hobby electronics market has a more complex datasheet. Do you need a degree in engineering to read one of those datasheets? No. (How would I know? I don't have a degree, and I'm not currently a student.) Does it take patience? Always.


The camera interface is as follows:

In RGB mode the camera sends 2 bytes (16 bits) per pixel. These 2 bytes are encoded, as Leon mentioned, in the 565 format: the first 5 bits are the red value, the next 6 bits describe the green channel, and the final 5 bits describe the blue channel. (Leon already mentioned this, but it bears repeating.)
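
If it helps to see that bit layout in code, here's a minimal sketch of unpacking one pixel. The byte order is an assumption; check the datasheet for whether the high or low byte comes off the bus first.
Code:
#include <stdint.h>

/* Unpack one RGB565 pixel into its 5/6/5-bit channel values.
   Assumes 'hi' is the first byte clocked off the bus. */
void rgb565_unpack(uint8_t hi, uint8_t lo, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint16_t pixel = ((uint16_t)hi << 8) | lo;
    *r = (pixel >> 11) & 0x1F;  /* top 5 bits    */
    *g = (pixel >> 5)  & 0x3F;  /* middle 6 bits */
    *b = pixel         & 0x1F;  /* bottom 5 bits */
}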

The camera spits out the data in relation to Dclk. The VD line tells you that there's a new frame (image) available. The HD line tells you that a new scan line has started. When both lines are high, the data will be clocked out of the D0-D7 lines. Every time the clock transitions from low to high, the next byte is available. (It's a pretty standard clocked 8-bit parallel bus design.) When HD goes low, you know that the current scan line is done. Once HD comes back high, you're clocking data out for the next scan line. After all the scan lines have been sent, the camera will take HD and VD low and hold them there until it's ready to clock out more data.
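
Purely to make that handshake concrete, here's a conceptual Arduino-style sketch of the sequence. The pin numbers are placeholders, and as the rest of the thread explains, digitalRead() polling like this is nowhere near fast enough for the real part -- treat it as a description of the protocol, not a working driver.
Code:
const int VD_PIN   = 2;   // frame valid (placeholder pin)
const int HD_PIN   = 3;   // line valid  (placeholder pin)
const int DCLK_PIN = 4;   // pixel clock (placeholder pin)

uint8_t readDataBus() {
    uint8_t b = 0;
    for (int i = 0; i < 8; i++) {
        b |= digitalRead(5 + i) << i;   // D0-D7 assumed on pins 5-12
    }
    return b;                           // a real driver would read a whole port at once
}

void captureFrame() {
    while (digitalRead(VD_PIN) == LOW) { }              // wait for a new frame
    while (digitalRead(VD_PIN) == HIGH) {               // frame in progress
        if (digitalRead(HD_PIN) == HIGH) {              // scan line active
            while (digitalRead(DCLK_PIN) == LOW) { }    // wait for Dclk rising edge
            uint8_t byteIn = readDataBus();             // next byte is valid now
            (void)byteIn;                               // ...store or process it here...
            while (digitalRead(DCLK_PIN) == HIGH) { }   // wait for Dclk to fall again
        }
    }
}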

Setting the camera up is done over the I2C bus: you send the camera's address (it's in the data sheet), the register you want to change, and the value for that register, and that changes the setting. According to the report from the Army, the only command you need to send to the camera is 0x02.
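
With the Arduino Wire library, that kind of register write is only a few lines. The address, register, and value below are placeholders, not values from the datasheet:
Code:
#include <Wire.h>

const uint8_t CAM_I2C_ADDR = 0x21;   // placeholder -- use the address from the datasheet

// Write one value to one of the camera's configuration registers over I2C.
void camWriteReg(uint8_t reg, uint8_t value) {
    Wire.beginTransmission(CAM_I2C_ADDR);
    Wire.write(reg);
    Wire.write(value);
    Wire.endTransmission();
}

void setup() {
    Wire.begin();
    camWriteReg(0x02, 0x01);   // placeholder register/value -- e.g. the "start capturing" command
}

void loop() { }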

NOTE: The camera is NOT a 5V or 3.3V device. It might work with those voltages (I wouldn't try it) but most likely it'll release the magic blue smoke and crap out on you. Logic high will be the same as the camera's voltage source (it's in the datasheet). Logic low is ground, or 0V. ALL the IO pins on the camera are digital IO.

What do you need to provide the camera? (A minimal pin-setup sketch follows this list.)
-Voltage (VDD) and Ground (VSS)
-Dclk - a square wave with a frequency >= 6 MHz
-I2C interface - you'll need to tell the camera to start taking frames.
-2 digital inputs for the camera's control lines. You NEED to make these interrupts, because the camera will start sending data whether you're ready or not.
-8 digital inputs -- this is your parallel input bus.
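
Here's what that pin setup might look like on an Arduino, with made-up pin assignments (VD on pin 2 and HD on pin 3 so both can be external interrupts on an Uno-class board):
Code:
#include <Wire.h>

const int VD_PIN = 2;                 // frame sync -- external interrupt capable
const int HD_PIN = 3;                 // line sync  -- external interrupt capable
const int DATA_PINS[8] = {5, 6, 7, 8, 9, 10, 11, 12};   // D0-D7 parallel bus (placeholders)

volatile bool frameStarted = false;
volatile bool lineActive   = false;

void onVD() { frameStarted = true; }  // a new frame is starting
void onHD() { lineActive   = true; }  // a new scan line is starting

void setup() {
    Wire.begin();                                    // I2C for camera configuration
    for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], INPUT);
    pinMode(VD_PIN, INPUT);
    pinMode(HD_PIN, INPUT);
    attachInterrupt(digitalPinToInterrupt(VD_PIN), onVD, RISING);
    attachInterrupt(digitalPinToInterrupt(HD_PIN), onHD, RISING);
    // Dclk generation is not shown -- see the timing discussion later in the thread.
}

void loop() { }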

Keep in mind that you have 0.16 microseconds (assuming a Dclk of 6 MHz) to read the byte, process the byte, and get ready for the next byte. With the Arduino running at 16 MHz, that gives you approximately 2 processor clock cycles to play with. 2 clock cycles isn't much to play with; you may have time to execute 1 ASM instruction. Even with a processor running at 20 MHz, you've only got 3 cycles. A Dclk of 6 MHz also only gives you ~9 frames per second. Great for still shots, but if you want full motion video, you'd need about 15-18 MHz at Dclk.

In short the Arduino has nowhere near the processing speed to handle directly interfacing this camera.

If you really want your Arduino to interface with this camera, you need something in between that can buffer an entire frame's worth of data from the camera and spit it back out slowly to your Arduino. You could use an ARM7 or ARM9, or program an FPGA to do the job. The clock for a DSP would need to be approximately 50-100 MHz for it to handle the task.
By d4n1s
#113128
@thebecwar you gave the best response so far, thank you very much. Now I have all the info I need!

Btw, I will be using still frames for tiny image processing (comparing pixels between them), approx. 1 fps. I think the Arduino will make it; what do you think?
By thebecwar
#113131
If you drop the camera's Dclk rate to 1 MHz, and code directly in ASM, you might be able to do it. I say might, because we aren't talking about a lot of headroom here. Plus the amount of data being slung about is pretty significant. Even a 640x480 image will take up about 600k of RAM. (640x480 = 307,200 pixels * 2 bytes per pixel = 614,400 bytes = 600 kB) If you're comparing 2 images, that's 1200 kB of raw data.
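
To put that against the chip itself (assuming an ATmega328-class Arduino with 2 KB of SRAM -- the exact part isn't named in this thread), the arithmetic looks like this:
Code:
// Rough buffer math, assuming an ATmega328-class Arduino with 2 KB of SRAM.
#define FRAME_W          640UL
#define FRAME_H          480UL
#define BYTES_PER_PIXEL  2UL                                     // RGB565
#define FRAME_BYTES      (FRAME_W * FRAME_H * BYTES_PER_PIXEL)   // 614,400 bytes
#define LINE_BYTES       (FRAME_W * BYTES_PER_PIXEL)             // 1,280 bytes
#define AVR_SRAM_BYTES   2048UL
// FRAME_BYTES / AVR_SRAM_BYTES == 300: one frame is ~300x the chip's entire RAM,
// and even a single scan line eats well over half of it.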

I think you'll run into the biggest hurdle trying to time out the interface.

Input/interrupt processing on the AVR takes (at a minimum):
4 clock cycles to enter the interrupt vector
2 clock cycles to store the register value (assuming you're using all 8 pins from only 1 IO Port) and post increment the pointer.
2-6 clock cycles to test the pointer, figure out if you've gotten 600k and jump if you need more data
(This assumes you aren't returning from the interrupt. Add 4 clock cycles for each pixel if you are.)

Each clock cycle @ 16 MHz is 0.0625 microseconds (us). If you're running the camera at 1 MHz, the Dclk will cycle low->high every microsecond. At 1 MHz the camera image will be overexposed and washed out at best, and that's already the best-case scenario. Anything over 0.5 us and you'll miss the falling edge of the clock pulse, and with it your chance to read in the data. If you code very carefully you might be able to get away with 1 us per pixel, but I can't say for sure. The best case has it at 8 cycles to process a pixel, with the worst case being 12. 8 clock cycles is 0.5 us, and 12 is 0.75 us.
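
For reference, here's roughly what that interrupt looks like in avr-gcc C, with the hand-counted cycle costs from the list above as comments. The vector, port, and buffer size are assumptions -- they depend on how you actually wire Dclk and the data bus.
Code:
#include <avr/io.h>
#include <avr/interrupt.h>

#define CAPTURE_BYTES 1280            // one 640-pixel scan line at 2 bytes/pixel

volatile uint8_t  lineBuf[CAPTURE_BYTES];   // per the math above, most of a 2 KB part
volatile uint16_t bufIdx = 0;

// Fires on each Dclk edge (assumption: Dclk wired to INT0, data bus on PORTD).
ISR(INT0_vect)                        // +4 cycles just to enter the vector
{
    lineBuf[bufIdx++] = PIND;         // ~2 cycles to store the port and bump the pointer
    if (bufIdx >= CAPTURE_BYTES) {    // 2-6 cycles to test and branch
        EIMSK &= ~_BV(INT0);          // buffer full: stop taking this interrupt
    }
}                                     // +4 more cycles to return, if you return at all
In practice avr-gcc's ISR prologue and epilogue push registers and SREG on top of those hand-counted numbers, which is part of why this discussion keeps coming back to hand-written ASM.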

There's little to no room for mistakes. Keep in mind also that you need to generate the square-wave clock pulse. You could do that in your assembly code, if you're VERY careful, but it's tricky at best. Triggering an interrupt on the falling edge of the clock might work, but that adds complexity, depends on an external clock source, and adds at least 4 cycles to your loop.

In theory it can be done. The timing requirements are what'll most likely end up making or breaking the project. The 8-bit AVRs are great MCUs, but they are not really up to the task of processing data coming in at 8 Mbps. (Yes, that little camera is really shoveling 8 million bits per second @ 1 MHz.)

I'm not saying it's impossible, but it's like building a rocket powered Honda. You might get from point A to B, but in between, you'll probably want to pray.
By d4n1s
#113140
I get what you mean, and no, I will be processing the same image; I won't be outputting video. Imagine it like this: I capture a picture, and when the first pixel comes in I save its RGB values in 3 variables. As each next pixel arrives I compare its RGB values with the stored ones, and if it is, let's say... more green, I replace them, and so on for the rest of the pixels. I will also be using a counter to identify the pixel coordinates. That will take me about 5 RAM locations, and I guess comparing won't be that memory consuming.
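
A sketch of that running comparison, assuming each pixel's R, G, and B have already been unpacked from the bus, really does come down to a handful of variables:
Code:
#include <stdint.h>

// Running "greenest pixel so far" comparison -- the state is only a few variables.
static uint8_t  bestR, bestG, bestB;
static uint16_t bestX, bestY;
static uint16_t curX, curY;          // incremented as pixels and lines arrive

void comparePixel(uint8_t r, uint8_t g, uint8_t b)
{
    if (g > bestG) {                 // "more green" than the best so far
        bestR = r;  bestG = g;  bestB = b;
        bestX = curX;
        bestY = curY;
    }
}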

What's the worst case scenario? Is there any possibility that my camera and/or Arduino processor will be baked? (Assume I have done the logic level conversion successfully.)
By phalanx
#113141
Arduinos simply aren't up to the task of acquiring, storing, and processing image data in a reasonable time frame. Microcontrollers in general excel at responding to discrete events but are not designed or intended to be used with large amounts of streaming data. More powerful microcontrollers like the ARM line may have fast enough execution times to get basic functionality with a 640x480 camera, but even then you are consuming a huge portion of the available bandwidth with little room to do anything else. Higher resolutions and frame rates without a JPEG output option are not feasible with your standard microcontrollers.

Non traditional microcontrollers like the XMOS and Propeller might be more up your alley for applications like this. Both controllers have multiple cores and the XMOS is capable of running at relatively high frequencies. You may be able to dedicate a core to acquiring an image while other cores are busy processing data as it becomes available. You would have to do the due diligence of researching their capabilities to see if they can meet your requirements.

The ideal solution for camera interfacing is to use programmable logic like CPLDs and FPGAs. These devices excel at working on repetitive tasks involving large amounts of high speed data making them suitable for any camera device you throw at them. They can simultaneously acquire an image, store it in external memory, compare the differences between the new and old frame, and make the data available through an SPI, I2C, or other interface of your choice. The problem with these is there is a steep learning curve to understand how they work and how you are supposed to program them. Traditional programming concepts taught to C programmers don't apply here.

thebecwar already did a first round of go/no-go timing checks for the Arduino and says you are pushing it even running the camera at its slowest while using highly optimized assembly code. Because of the lack of experience you have in this subject matter, I would consider finding another way to tackle your project.

-Bill
By propjohn
#113145
This part really isn't intended to capture single frames. I'm not sure it can even be done without another processor or FPGA to grab frames and store them for you. If you can't clock out the whole frame before the next one starts to capture, I can see three possibilities:
  • You might confuse the device and it crashes.
  • The device might reset the readout pointer to the beginning of the buffer (partial image transfer).
  • The image transferred would be composed of data from successive frames. Any sort of motion would create a fragmented-looking image.
It might be possible if the frame buffer can be read while the camera is in sleep mode. Completely untested armchair design: Issue wakeup command via I2C, wait for a rising edge on VD, Issue sleep command via I2C, wait for 3 frames (200ms @ 15fps), clock out the data.
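
Purely to make that proposed sequence concrete -- every address, register, and value below is a placeholder, and the next reply questions whether the part even allows readout from sleep -- it would look something like:
Code:
#include <Wire.h>

const uint8_t CAM_ADDR  = 0x21;   // placeholder I2C address
const uint8_t REG_POWER = 0x00;   // placeholder register
const uint8_t VAL_WAKE  = 0x01;   // placeholder "wake up" value
const uint8_t VAL_SLEEP = 0x00;   // placeholder "go to sleep" value
const int     VD_PIN    = 2;      // frame sync input

void camWrite(uint8_t reg, uint8_t val) {
    Wire.beginTransmission(CAM_ADDR);
    Wire.write(reg);
    Wire.write(val);
    Wire.endTransmission();
}

void setup() {
    Wire.begin();
    pinMode(VD_PIN, INPUT);
}

void loop() {
    camWrite(REG_POWER, VAL_WAKE);            // 1. wake the sensor
    while (digitalRead(VD_PIN) == LOW) { }    // 2. wait for a rising edge on VD
    camWrite(REG_POWER, VAL_SLEEP);           // 3. put it back to sleep
    delay(200);                               // 4. wait ~3 frames at 15 fps
    // 5. ...clock the buffered frame out here, if the part allows readout in sleep...
    while (true) { }                          // stop after one attempt
}
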
By thebecwar
#113148
Running the numbers again, I don't see any way that you can clock the camera from your processor running at 16 MHz. Typically for a clock signal you'd use a timer interrupt, but even the most optimistic estimate places this at 9 clock cycles, which is more than 180 degrees of the Dclk period. Even at 20 MHz, you'd only have one clock cycle in between interrupts. You could still code it manually in ASM, using some very careful math and a close reading of the processor's datasheet, but it would be an uphill struggle at best. (Note: this only applies to a 1 MHz Dclk. The 6 MHz clock required to start the camera's operation would be impossible as an interrupt-based clock.)

Running at 16 MHz, the fastest you could fire that interrupt is about once every 9 cycles, or roughly 1.77 MHz, and that assumes you leave no spare clock cycles to do anything else, like reading the data on the bus, averaging the data, etc. Since each Dclk period needs two of those transitions, the output clock itself comes in under 1 MHz. 8 Mbps is a LOT of data for an 8-bit processor to handle. At 20 MHz, your maximum interrupt rate increases to about 2.22 MHz, and spares you on average 1 clock cycle between interrupts to do anything.

Manually coding in a 1 MHz clock without interrupts is possible, but it requires you to toggle the clock pin every 8 clock cycles. That requires that you know with 100% accuracy how many clock cycles each of your instructions takes, and it would require a lot of forethought and planning to ensure that your clock is consistent. (Example: you have a clock update 3 cycles from now, but your next instruction takes 4 cycles to complete. You'd need 3 NOPs to fill the space.) You'd also be dedicating 12.5-25% of your computing power to the simple task of running a clock (2 or 4 cycles out of every 16).
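
As a sketch of what that cycle counting looks like (assumptions: Dclk on PB0, a 16 MHz ATmega328-class part where SBI/CBI take 2 cycles each, and that the compiler really emits single SBI/CBI instructions for these port writes -- you'd want to check the generated assembly, or just write the loop in pure ASM):
Code:
#include <avr/io.h>

// Bit-banged 1 MHz Dclk from a 16 MHz AVR: 16 CPU cycles per output period.
void bitbang_dclk_forever(void)
{
    for (;;) {
        PORTB |= _BV(PORTB0);                                             // SBI, 2 cycles: Dclk high
        __asm__ __volatile__("nop\n\tnop\n\tnop\n\tnop\n\tnop\n\tnop");   // 6 spare cycles
        PORTB &= ~_BV(PORTB0);                                            // CBI, 2 cycles: Dclk low
        __asm__ __volatile__("nop\n\tnop\n\tnop\n\tnop");                 // 4 spare cycles
        // the loop's own jump (RJMP, 2 cycles) closes out the 16-cycle period
    }
}
Those nop slots are the only places any real work (reading the bus, comparing pixels) could go, and anything you put there has to be cycle-counted so the clock stays at exactly 16 cycles per period.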

Because of all the above, I wouldn't recommend using this camera with a processor running at any less than 50 MHz. Could it be done? As an academic exercise I'd say it's possible, but that's with no looping, no branching, limited conditionals, and almost no data processing. Any 16-bit arithmetic, floating-point numbers, or subroutines would make life a living hell.
propjohn wrote:It might be possible if the frame buffer can be read while the camera is in sleep mode. Completely untested armchair design: Issue wakeup command via I2C, wait for a rising edge on VD, Issue sleep command via I2C, wait for 3 frames (200ms @ 15fps), clock out the data.
Looking at the camera's datasheet, it looks like it will finish sending an entire frame before it goes into a powered-down state. Also, sleep mode doesn't allow you to clock out the data.

The only way I see this working is to find/create an IC/CPLD/FPGA/DSP or an ARM7/9 based processor that can buffer the frame data for you.

--tB--
@Phalanx - Thanks for sanity checking my math. I've been running magnetic flux equations all day, and I wasn't sure if my numbers were in the right ballpark.
By d4n1s
#113156
I just can't understand how it is 8 Mbps? I won't be encoding the image, just handling pixels on the fly. Doing 492*600*2 (the screen dimensions, which are the pixel counts, times 2 bytes per pixel, since I will be outputting in RGB), it becomes 590,400, or 590 kbps. I could possibly do 0.5 fps if you still consider taking snapshots each second to be too much. Also, I don't know if you included this in your calculations, but I won't be outputting the data anywhere so far. I will just be calculating the 2D coordinates of the brightest pixel in the screen (using math).

Also, I am sorry to ask, but if my Arduino clock speed is 16 MHz, why would it be hard to use a camera which runs at a much lower frequency?

I won't be doing any other operations...
By phalanx
#113158
d4n1s wrote:Also, I don't know if you included this in your calculations, but I won't be outputting the data anywhere so far. I will just be calculating the 2D coordinates of the brightest pixel in the screen (using math).

Also, I am sorry to ask, but if my Arduino clock speed is 16 MHz, why would it be hard to use a camera which runs at a much lower frequency?
The large posts by thebecwar are trying to show you the difficulties in simply getting the raw data off of the camera using an AVR controller. At the slowest you can realistically clock the camera, you are pushing the physical limits of what the AVR can muster simply to produce the clocking signals for the data. This leaves you no time to move data, let alone perform compare operations on it. The AVR and every other 8-bit MCU I can think of are simply not designed to operate in this type of application. You can use more powerful controllers like an ARM7 or ARM9, but those only solve the problem by being able to execute more instructions per second. The general inefficiency is still there, and you will hinder the ARM's ability to perform other tasks.

FPGAs can work on this kind of data without breaking a sweat. Their architecture makes them especially adept at processing repetitive loops with tight timing constraints. If you want to move forward with this camera, that would be your best option.

You could also try to find a camera that outputs JPEG images, which would reduce the hardware requirements of your controller but would increase the complexity of your code, since you would have to handle the decoding of the image.

-Bill
By thebecwar
#113163
OK... the 8 Mbps comes from 1 MHz (1,000,000 Hz) * 8 bits per transfer = 8,000,000 bits per second = 8 Mbps. Remember little 'b', so we're talking bits. For comparison to ethernet/wifi, that works out to 1 MB/s (big B = Bytes). It's also far more than serial, even at 115200 baud (69.4 times more, to be exact).

Framerate is set by your clock speed, not the other way around. At 1 MHz, according to my references, you've got about 1.5 FPS. Much below 1 MHz and your image won't be clocking correctly, and it will be too overexposed to do you any good.
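
That figure lines up with the raw byte math; this is just a back-of-the-envelope check, not a number from any datasheet:
Code:
#include <stdio.h>

int main(void) {
    // Back-of-the-envelope frame rate at Dclk = 1 MHz (one byte per clock).
    const double bytes_per_sec   = 1000000.0;            // 1 MHz, 8 bits per transfer
    const double bytes_per_frame = 640.0 * 480.0 * 2.0;  // RGB565: 614,400 bytes
    printf("%.2f frames/s\n", bytes_per_sec / bytes_per_frame);  // ~1.63
    // Blanking intervals between lines and frames pull that down toward ~1.5 FPS.
    return 0;
}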

The clocking situation requires an understanding of how accurate clocks are usually generated in software. (Yes, some chips do have clock dividers built in to provide this, but we're talking about a software solution.) Usually when you want to generate a clock, you set up a timer that throws an interrupt every time it overflows/hits a specific value. Knowing your processor's clock speed, you can calculate with reasonable precision the number of cycles required to get the clock rate you need. You also have to realize that the timer has to fire at twice the frequency of the desired output clock. (We'll ignore actually reading out the data for now.)

Clock generation using a timer fires an interrupt every time the timer overflows/reaches its set value. On the 8-bit 'mega' AVR chips, calling an interrupt vector requires 4 instruction cycles. (The current instruction pointer needs to be pushed to the stack, the offset of the vector needs to be read out of memory, and the processor needs to jump to that location.) Returning from an interrupt also requires 4 clock cycles. (The processor needs to unwind its execution back to the point that it was at before the interrupt was fired.) That's 8 processor cycles, and all you've done is move from one spot in memory to another. You haven't actually done anything yet.

Inside the interrupt vector we need to either bring the clock pin high or low depending on its current state. You could do this with branching logic, but that always requires more clock cycles. We'll use a 2-instruction method instead, leaning on the fact that on the mega48/88/168/328 parts, writing a 1 to a bit in the PINx register toggles that output pin (older AVRs would need an IN/EOR/OUT sequence and a couple of extra cycles).
Code:
LDI R16, 0x01   ; bit mask for pin 0 of port B (our clock pin)
OUT PINB, R16   ; writing a 1 to a PINx bit toggles that output (mega48/88/168/328)
Adding the 2 cycles we need to toggle the clock to the 8 we need to enter and return from the interrupt, we have a total of 10 processor cycles per clock transition, or 20 for every full cycle of the output clock. Therefore, the fastest clock we can generate is about 1/20th of our processor's clock. (I say about because it's after midnight and my math skills decline after 11.) 1/20th of 16 MHz is 800 kHz.

Add in instructions to fetch the data off of the parallel bus when the clock goes low and you can see how it's mathematically improbable that you have enough processor speed to clock the camera.

Hopefully this is clear enough. Parallel data buses are easy to work with, but at higher throughput they can be quite a pain, especially on a limited platform like an MCU.
By d4n1s
#113222
Should I put resistors on the camera's output pins, or only on its input pins, to do the logic level conversion? Which ones need them?