CMOS camera as a sensor

Today I am going to talk about low cost and effective image processing for very specific embedded applications. I am not talking about robots recognizing their environment or finding their way to a power plug, but rather about using a small CMOS camera as a better sensor. We have used this technology for various clients in our consulting service, so I am not going to get into the specifics of any of those applications, because that would breach our NDAs. Still, IKALOGIC aims to educate and share knowledge with the world. With that in mind, I thought about writing a short tutorial showing beginners how to get started in this rather intimidating field.

Myths and truths about image processing

Although there are many tutorials and articles out there on this subject, they tend to discuss the most advanced and complex solutions, which is – to be honest – often very impressive, but can be very discouraging if you’re just getting started. It’s important to say that in many applications, all this complexity is not even needed, and a camera like this one [add reference and link] and an xMega or ATMega AVR micro controller can do the job just fine.

There are many myths about image processing in embedded applications, here are some of them:

  • It will cost a lot of money
  • It needs at least an ARM micro controller running quite fast (> 60 MHz), or something even more powerful
  • It needs to be implemented via MatLab or other software running on a computer

Well, depending on your application, all of the above may turn out to be false.

Think out of the box

Before we get into the “how” part, let’s talk about the “why”. I would like you to put aside the idea that using a camera in your project means taking full resolution pictures and then applying ad hoc algorithms! You need to think about the final aim, the issue you need to resolve. You need to think out of the box and look at the camera as much more than that, or much less! It all depends on your application. To explain my point of view, let’s look at these applications:

Robotics competition play ground

Many robotics competitions involve line sensors, used to follow guidance lines traced on the ground. Most competitors will build some kind of optical sensor array for this purpose. This solution tends to be the cause of many “bugs”, because variations in ambient light lead to faulty sensor readings. Trust me on this, I have been in that exact same place back when I was a student. Other competitors will completely ignore the lines traced on the ground and use very complex camera-based navigation, pointing the camera at the scene in front of the robot. They then have to deal with the analysis of all the other moving objects and elements, and implement various algorithms like shape recognition. So, what I mean is: why not use the camera as a line sensor array?

Using a CMOS sensor as a better line sensor

You do not have to use the full resolution and full power of your embedded camera; it can be more interesting to focus the image processing on a specific part of the image. In this case, this would be a couple of horizontal lines. As the picture above shows, you can start with a high resolution image (part A of the picture above), reduce the resolution and get rid of the color information (part B), and simply convert the pixels into binary information (part C). This last step is quite important and can be done in many ways, but the simplest is to apply an algorithm like this:

if Intensity > threshold then
   pixel = 1 (white)
else
   pixel = 0 (black)
end if

By the way, once you have “degraded” the image to a couple dozen or a few hundred pixels, processing becomes much less demanding and can be done by a cheap 8 bit micro controller.

Price? I am sure that a few-dollar (< $7) CMOS camera will cost less than an equivalent array of LED transmitters and receivers, if you count all the PCB space they would occupy.

Another very simple example is using the camera as a proximity switch in conveyor belt systems. If you think about it, I am sure you can find many situations where a camera can be used as an “enhanced” optical switch. It does not have to involve very complex image post-processing algorithms, but the simple analysis you can do with the richer information coming from the camera can be enough to solve many problems you faced with a standard optical proximity switch.

I wish I could speak of all the ‘unconventional’ applications where we used tiny camera modules to provide reliable solutions to clients. In most cases, the client had never even considered using a camera.

If there is one point I want to stress, it is that thanks to steadily dropping component costs, you can now start using embedded cameras and image processing where they do not initially seem to belong. It’s all about degrading the image quality to increase the performance of your system! Ironic, isn’t it?

Enough talking, let’s get our hands dirty!

Schematic of the TCM8230MD CMOS camera connected to an ATxMega A3

For the sake of our example, we are going to use the TCM8230MD CMOS camera, easily available from many distributors such as SparkFun. I am going to share a (working and tested!) example schematic diagram showing the electrical connections between the CMOS camera, the micro controller and the different power supply sources. Below is the datasheet of the camera we are using:


You may think that this datasheet is not very well written, but trust me – in the field of CMOS cameras – this is a fairly well written datasheet! You have all the information needed to understand the function of each signal.

In order to follow this article and understand the algorithm, I encourage you to study the following signals in the datasheet: EXT CLK, VD (Vertical Sync), HD (Horizontal Sync), DCLK and of course the 8 DOUT signals (D0 to D7).

In the schematic, you will notice that the ATxMega micro controller provides the clock (EXT CLK) for the camera. This is a key feature of this design, as it allows us to adapt the data rate of the camera to the capabilities of the micro controller.

Initializing the camera

In this section I am going to explain the most basic interfacing sequence, using an AtXmega 256 A3 micro controller. Please note that the following code blocks are not complete, functional source code, but rather blocks taken from a bigger, complete project.

At some point, usually at system start-up, we need to initialize the camera. There are many ways to do that (a start-up sequence is described in the datasheet). Below is an initialization code we have used before:

//Camera Initialisation sequence

    CAM_ICLK_ON;    //start the camera clock output (see the notes below)

    //Note: cam_twi_write() below is a placeholder for your TWI driver's
    //2-byte write call; the original snippet omitted the actual sends.
    cam_sendBuffer[0] = 0x02; cam_sendBuffer[1] = 0x40;
    cam_twi_write(cam_sendBuffer, 2);
    while (twiMaster.status != TWIM_STATUS_READY) {}
    //adjust saturation
    cam_sendBuffer[0] = 0x18; cam_sendBuffer[1] = 0x00;
    cam_twi_write(cam_sendBuffer, 2);
    while (twiMaster.status != TWIM_STATUS_READY) {}

    //adjust contrast
    cam_sendBuffer[0] = 0x11; cam_sendBuffer[1] = init[6];
    cam_twi_write(cam_sendBuffer, 2);
    while (twiMaster.status != TWIM_STATUS_READY) {}
    //adjust brightness
    cam_sendBuffer[0] = 0x12; cam_sendBuffer[1] = init[7];
    cam_twi_write(cam_sendBuffer, 2);
    while (twiMaster.status != TWIM_STATUS_READY) {}
    cam_sendBuffer[0] = 0x03; cam_sendBuffer[1] = 0x22;
    cam_twi_write(cam_sendBuffer, 2);
    while (twiMaster.status != TWIM_STATUS_READY) {}

What this piece of code does is:

  • Start the camera clock (generated by a timer on pin PC3). The line “CAM_ICLK_ON” is simply a macro that can be replaced by “PORTC_DIR |= (1<<3);”. In other words, PC3 is configured to output a clock via a timer (around 12 MHz), and we simply set PC3 as an output when we want to “output” this clock.
  • Set up the I2C port – nothing special about that, except that ATMEL calls it TWI (Two Wire Interface)
  • Send a series of commands to adjust the CMOS camera operation. Each command is composed of two bytes. Please refer to the datasheet for the meaning of each of those commands.

After the execution of this code, and provided a clock is supplied to the camera, it will output pixel information, synchronized with the VD, HD and DCLK signals.

Taking a picture

Now that you have the signals coming out of the camera, you will face a quite frustrating problem! You will want to store the pixel information in a 2D array. No matter how well written your code is, it will be complicated to synchronize reading the pixels with the 3 signals that define the X and Y coordinates of each pixel. The problem is that if you want to take pictures fast enough, you won’t be able to keep up with the clock, and if you slow down the clock, it will take too much time to “scan” the CMOS sensor and the image will be somewhat blurry or distorted. Think of the time taken to clock out all the pixels of a frame as the shutter speed.

The idea we came up with to overcome this problem is to “stop” or slow down the clock from time to time, whenever the micro controller can’t keep up. Simple, really simple. So, here is the code we use to take a picture:

loop_until_bit_is_clear(VPORT0_IN,CAM_VD);    //wait for a
loop_until_bit_is_set(VPORT0_IN,CAM_VD);      //new frame
TCC0.CCA = 1;    //divide the clock by 2
                 //normally TCC0.CCA = 0
for (y = 0; y < 96; y++)
{
    asm volatile(
        "cli"                   "\n\t" //no interrupts while capturing a line
        "L1: in r24, 0x0012"    "\n\t" //read portA_in (mapped to port0) to r24
        "sbrs r24,1"            "\n\t" //wait for the rising edge on cam HD (new line)
        "rjmp L1"               "\n\t" //loop
        "nop"                   "\n\t" //skip the blanking period at the start of the line
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"

        "L2: nop"               "\n\t" //delay between two consecutive pixel reads
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "nop"                   "\n\t"
        "in r24, 0x0016"        "\n\t" //read pixel info from portB_in (mapped to port1) to r24
        "st %a0+, r24"          "\n\t" //store the pixel and advance the pointer

        "in r24, 0x0012"        "\n\t" //read portA_in to r24
        "sbrc r24,1"            "\n\t" //wait for the falling edge on cam HD (end of line)
        "rjmp L2"               "\n\t" //loop
        "sei"                          //done, turn interrupts back on
        :                              //no outputs
        : "e" (frame[y])               //pointer to the current line buffer
        : "r24", "memory"
    );
}
TCC0.CCA = 0; //go back to fast clock

The code above simply waits for a rising edge on the VD signal, then drops the clock speed a little. It then loops through the 96 lines of the image (we were using the 128×96 image format), storing the 128 pixels of each line. Note that for this code we were only interested in the RED component, so we only read one byte per pixel via port B.

As explained in the code comments, we used virtual ports 0 and 1 to access port A and port B respectively. Virtual ports exist only in the Xmega family of AVRs, and allow the use of the fast assembly "in" and "out" instructions, which execute in only 1 clock cycle.

At this point, it's up to you to adapt this to your specific application.

That's it for this tutorial. As I said before, it's important that you read the datasheet before getting into these code snippets.

Below I have included an Eagle project of the schematic above; it contains the PCB footprints of all the components, including the TCM8230MD CMOS camera.

Don't hesitate to leave comments below or ask more questions!


  1. Alex L September 17, 2013 at 2:00 pm

    Good write up. Was wondering if I could get some help, I’m interested in taking a very low resolution (24×24) RGB picture but I’m struggling to understand your assembler code (i’m rubbish at assembler code) and how it relates to the data sheet (particularly because you are only reading the red pixels). What would the code snippet look like if you read the blue and green pixels too? Thanks.

    • Ibrahim KAMAL September 17, 2013 at 3:15 pm

      Well… before i even try to dig into that assembler piece of code again (i hate it too!!), let me ask you, what are you trying to do exactly? because depending on your application, you may not need to have a high FPS..and if you don’t need a high FPS, you may just code that in plain C.

      • Alex L September 17, 2013 at 5:04 pm

        Thanks for your reply. My idea was to make a decorative rgb led display that just displays what the camera sees (kind of like a mirror) so I was looking to get maybe 10fps worst case scenario but more if possible. Future application would be recording a short video at 10fps to external flash to be played back on a similar or smaller display (this is just a pipe dream atm though). I thought assembler code was required to ensure data is recieved at the correct point of the data stream?

  2. Zohaib July 30, 2013 at 4:04 pm

    Can i use a common optical mouse as a line detector with PIC16F877A .. how can i interface it with PIC??

    • Ibrahim KAMAL August 3, 2013 at 9:37 am

      If you can get your hand on a decent datasheet, i think it can be done 🙂

      However, i can’t tell you how as i never done that before

  3. HexFever July 25, 2013 at 9:47 am

    Hello Ibrahim,
    Can a camera like the one be used for image recognition with a PIC MCU? I am planning to implement image recognition for vehicles.

    • Ibrahim KAMAL July 25, 2013 at 11:07 pm

      If you can turn off JPEG compression and get the image in raw bitmap format, i would say yes.

      Dealing with a JPEG decompression is – IMHO – not an easy task for a pic mcu.

  4. DPK April 2, 2013 at 11:38 am

    Hello Sir,

    I want to take a high resolution pic from this camera and want to do on-board image processing; which RAM should I interface with the controller; SDRAM or SRAM? I am using an ATXMEGA128A1U controller with 8KB SRAM. Can you suggest some SRAM IC’s?


    • Ibrahim KAMAL April 6, 2013 at 6:14 pm

      mhh.. depends on the resolution of the picture you want to take..? how many colors?

  5. BKH March 13, 2013 at 7:31 am

    How can i grab a picture and then read pixel very slow?
    for exampe : use 12mhz clock and after ready a frame stop cmos camera and only read grabbed pixel with 1mhz clock and send data to an mmc…
    can i do it with this camera’s registers ?

    • Ibrahim KAMAL March 24, 2013 at 2:36 pm

      Well, you can’t slow down the pixels coming from the CMOS because it doesn’t have any memory buffer to store the pixels. Think of the CMOS sensor as a very big number of ADCs. If you read the data too slowly, it will be equivalent to a too long exposure time…

      The only solution is to capture the data to some ram device (or on the SRAM of your micro controller if you have enough of it), then write it slowly in the mmc.

  6. Steve Greenfield February 1, 2013 at 8:17 pm

    Are these cameras addressable? I only need to capture an area a few pixels wide. I was thinking of using a CCD line sensor such as in a scanner.

    If the camera is adressable, I could just scan one horizontal line repeatedly.

    • Ibrahim KAMAL February 6, 2013 at 7:33 pm

      Well, you can capture just a portion of the pixels array, but not as simply as using a function that takes as argument “column” and “row”.

      This camera (which has a standard interface) outputs the pixels one after the other, and provides a pulse for each new line and for each new frame.

      How many pixels you “read” fully depends on the code that interfaces with the camera.

      hope that helps.

  7. Ibrahim KAMAL November 1, 2012 at 11:47 am

    well, you could use a powerful mcu like a Kinetis K70 (it just happens that i am working on it for a project now) and multiplex the data lines using 3 input buffers or a single CPLD/FPGA device.

    Also, 10Hz for embedded electronics is not so low.. what resolution is acceptable for you..?

  8. Ibrahim KAMAL September 9, 2012 at 10:16 am

    You’re most welcome 🙂

    To interface a mcu to PC it’s a whole other problem. However, given the required bandwidth of a CMOS camera, you would most probably have to use a hi speed USB interface, like the FT245 FIFO (google for it, the IC costs ~3€)

    Another solution would be to compress the frames before sending them over USB1.0 or serial UART, but i wouldn’t go that way personally…

  9. Alfiansyah August 11, 2012 at 5:18 pm

    Hi, Ikalogic,

    Based on your article above i want to study about camera interfacing . . For now The target I want to accomplish is Streaming the frame via Xbee to PC.
    I’m not expecting to go to 30fps, 1-5fps just fine. . and about the pixel, as tiny as 100×100 is fine for the beginning. .

    for the design i’m going to use Atmega328, Is it possible to get colored ones? or just greyscale?
    or not possible to stream any pic at all? :'(

    For the cam module i will go to use OV7670. according to the datasheet It use the same parallel interface as TCM8230 ones(CMIIW). . so the interface would be not so far away from cam module you used.

    Thanks for the replies in advance. 😀

    • Alfiansyah August 11, 2012 at 5:35 pm

      Ups, i recently just saw your replies on one question before mine,

      So if i’m using YUV (4:2:2) which is means 8 bit per pixel,
      in 100×100 pixel will need 10KBytes RAM per Frame

      on other hand ATmega328 only have 2KB of RAM, which mean totally unable to do that. . 🙁
      Unless i degrade the format to lower bit per frame from before.

      So, should I use STM32 instead?

      • Alfiansyah August 11, 2012 at 5:38 pm

        Oh I’m sorry Miscalculation 80KBytes per Frame. . .

        • Ibrahim KAMAL August 12, 2012 at 12:57 pm

          mhh, no you’re right
          100*100 = 10K pixels
          1 byte per pixel
          then, total ram needed is 10K * 1 byte = 10K bytes.

          The question is do you need to store the data on the micro controller? why not send it directly over xbee and not use a single byte of *precisous* mcu ram ?

          • Alfiansyah August 12, 2012 at 1:29 pm

            How to send directly over Xbee? I mean the camera i planning to use is not uart based output . . . but parralel ones like TCM you used.
            So i take an assumption that mcu is needed to take the image then send it via xbee. . .

            and my question: is atmega capable doing it? i mean is the uart fast enough to send the data before the new one came?

            Thanks for your replies 😀

  10. Yati August 1, 2012 at 3:16 am

    Very basic question here: Can you tell me how much memory one picture takes? The idea of using this camera with an arduino as a sensor is brilliant. Would it be possible to have it take a picture, (low res is totally fine) and send the image over an xbee network? I know next to nothing about cameras, but I have a bit of experience with arduinos and xbee mesh networks.

    Thanks for your work here. I just discovered it.

    • Ibrahim KAMAL August 1, 2012 at 8:46 pm

      Taking a low res image and sending it over zigbee is something i did a couple of months ago for a client. Wasn’t with an ATMEGA but with an AT32 though. Also, i compressed the image further: i used only 4 bits per pixel.

      In your case, the calculation is very easy: multiply the number of pixel by the number of bits (4, 8 or 16) and you’ll get the total memory needed.

      As far as i remember i was doing live streaming over zigbee at a rate of 2 or 3 frames per seconds. Not much, but enough!

  11. Jeff July 20, 2012 at 9:21 pm

    hmmm….. could this be used in a backyard weather station to detect cloud cover? And maybe even detect if it is raining out? (I have a Raspberry Pi with the goal of home-building a weather station with it that I can connect to the internet. Am exploring ideas for the various sensors).

    If anyone has thoughts on that, I’m all ears.

    • Ibrahim KAMAL August 1, 2012 at 9:03 pm

      of course.. why not?!

      And, as far as i can remember, the Raspberry PI has an image sensor interface..

  12. Alfiansyah July 19, 2012 at 3:02 am

    Well Written Sir!! 😀 Thanks

    It gives me a whole brand new concept in interfacing CMOS Cam 🙂

  13. Martin May 14, 2012 at 4:09 am

    Nice writeup, thanks.

  14. euphoria damage April 30, 2012 at 12:43 am

    Hello,amazing job there..
    is it possible to use a simple cmos camera with arduino?in order to take simple low res snapshots?thanks !

    • Ibrahim KAMAL April 30, 2012 at 2:28 am

      Hi, & thanks 🙂

      Although i have almost 0 arduino experience, i am sure it is possible. you only need:
      – an I2C (TWI) interface
      – 12 GPIO
      – some free ram space to store the image.

      you may have to start from the QCIF format (128×96) and store only 1 pixel every 4 pixels to reduce resolution further more. i don’t think the m168 used in arduino have that much RAM.. right? 🙂

  15. krazzy April 18, 2012 at 6:24 pm

    I want to use this camera in a specific application, can u help me

    • Ibrahim KAMAL April 18, 2012 at 10:24 pm

      Well, if you have specific questions, post it and if i can help i will 🙂
      (don’t expect me to send you a ready made code, or to write a program for you)

  16. krazzy April 18, 2012 at 6:22 pm

    what about interfacing this camera with a pic micro controller

    • Ibrahim KAMAL April 18, 2012 at 10:25 pm

      which one do you want to use exactly? what is its maximum frequency?

      • Carlos Moreno May 7, 2012 at 7:39 pm

        Is possible to use a PIC 18f25k20?? And take 10 or 15 fps?

        • Ibrahim KAMAL May 8, 2012 at 8:56 am

          How fast can it go and how much memory does it have…? and what resolution do you need?

          The short answer would be yes, it is possible if you can at some point (during the start up phase) provide a minimum of 12 MHz clock to the camera.

          Then you can reduce the clock to the cam as low as you wish and hence reduce the FPS according to the capabilities of your PIC.
