03 November 2011

Tutorial 16e: Programming the Flash Memory

When we call the calibration values CALDCO_1MHZ and CALBC1_1MHZ in the code for our MSP430, those names refer to the addresses in SegmentA where the values are stored. Those addresses aren't defined in a place that's easy to find, however; they're in the linker command file, specific to each device. (If you're curious, take a look at the file in ${base dir}/Texas Instruments/ccsv4/msp430/include/msp430g2231.cmd, where your base directory is wherever you've installed CCS (likely in C:\Program Files for Windows). Be careful not to change anything in this file, though; it is essential for proper programming of the G2231 device, just as each device's command file is essential for programming that device.) We could make a copy of this file and use it in our linker when we program the MSP430, but that requires a number of steps that are difficult to remember. All we really need is a way to tell our program to look at the specific address where we've saved the calibration values for our UART. All we really need is a pointer.

Pointers
We've not dealt with pointers up to this tutorial, but they're not really a difficult concept. Let's say we have code that declares the following two items:
   int var;
   int *ptr;
The first is what we're used to seeing; we declare a variable of type int (16 bits in MSP430), which will store a signed integer value. The second is also a declaration of type int, but rather than a variable, it is a pointer. The stored number in ptr doesn't refer to the value of a variable, but rather the address where a variable of type int is stored. (The actual size of the pointer in an MSP430 is 16 bits, so char *ptr; would also be a 16 bit value, even though it points to memory where an 8 bit value is being stored.)

In the C language, we have a way to refer to the address of a specific variable by using the & character. So from this point, if we assign ptr = &var; then ptr now points to the address where we've stored the int value var. Similarly, we can reference the value stored in a pointer with the * character. int var2 = *ptr; would store the value at address ptr in the new variable var2. If we want to assign a specific address to a pointer, rather than reference the address of a variable, we can use a literal of the form (int *)0x1234.
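To make that concrete, here's a minimal sketch (the variable names are just for illustration):

void pointer_demo(void)
{
    int var = 42;                   // an ordinary 16-bit integer variable
    int *ptr = &var;                // ptr holds the address of var, not its value
    int var2 = *ptr;                // dereference: var2 gets the value stored at that address (42)
    char *cal = (char *) 0x10BE;    // a char pointer aimed directly at a specific byte address
    char calbc1 = *cal;             // read whatever byte is stored at 0x10BE
}

The last two lines are exactly the trick we'll use below: aim a char pointer at a known address and read the byte stored there.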

In the MSP430, an int variable takes 16 bits, while a char variable takes 8. When we want to reference a specific word, we'll use a pointer of type int. If we want to reference a specific byte, such as the calibration values for the DCO, we'll use a pointer of type char. To use the DCO values we'll program in SegmentB, we can use the following:
    char *CALBC1_UART = (char *) 0x10BE;
    char *CALDCO_UART = (char *) 0x10BF;
or, if we'd rather not use extra memory, define in the header:
    #define  CALBC1_UART    *(char *) 0x10BE
    #define  CALDCO_UART    *(char *) 0x10BF

The Flash Memory Controller
Ok, now we've got a handle on how to reference portions of the flash memory; it's time to learn how to actually write to it. The MSP430 has a peripheral designed specifically to manage the flash memory, called the Flash Memory Controller. Since it would be dangerous to manipulate the flash memory willy-nilly, it's set up like the Watchdog Timer: a key is needed to change any of its four control registers. The key value is 0xA5, written to the upper byte of each register and conveniently provided in the header files as the value FWKEY. Here are the portions we'll need to know about:

FCTL1

  • FWKEY (bits 15-8): Writes must include the key in these bits; they read back as the value 0x96. Each of the control registers has this field.
  • BLKWRT (bit 7): control bit for block write mode. We will be writing word by word, so we'll save this function for a future tutorial.
  • WRT (bit 6): write mode enable. Before we can write to any flash segment, this bit must be enabled.
  • MERAS & ERASE (bits 2 and 1): Control the mode of erasure. You can erase a single segment, the entire main memory, or the main memory and information segments both. (Depending on whether SegA is unlocked.)
FCTL2
  • FSSELx (bits 7-6): These bits select which clock source to use for the programming.
  • FNx (bits 5-0): These bits allow you to divide the source clock by any factor up to 64. (Divide by FNx + 1)
FCTL3
  • FAIL (bit 7): We don't want to see this bit set! For our purposes, it could mean the clock source has failed for some reason. If this happens, it needs to be reset by software before anything else can be done.
  • LOCKA (bit 6): I won't go into how this works; this bit controls whether SegA can be erased/written. If you find you absolutely must change something in this segment, you can read in the Family User's Guide to learn how to use this bit.
  • LOCK (bit 4): When this bit is set, the flash memory is completely locked, and cannot be erased or written.
  • WAIT (bit 3): Indicates when the flash memory is being erased or written.
FCTL4
  • This register is not available on all devices; it controls the 'marginal read mode'. We won't be using it here.
To do a successful programming of the Information Segment, we first set the clock for the controller. The actual frequency isn't terribly important, as long as it falls between 257 and 476 kHz. The default DCO setting of the MSP430 is about 1.1 MHz, and so a division of 3 (FNx = 2) works well, giving us roughly 370 kHz. We'll calibrate the DCO to 7.3728 MHz, so a division of 20 (FNx = 19) will give us about the same.

If we have anything we want to save in the segment, then the entire contents would need to be saved to a buffer. Likely, your SegB is blank, so we'll just proceed forward. Next, we set the controller to erase mode. We don't want to erase the main memory segments, so we only need the ERASE bit. We then clear the LOCK bit to allow changing the flash memory. When the controller is configured this way, we initiate the erase cycle by writing to any address within the segment. There's nothing magic about what value we write, but it is absolutely crucial that you have your address pointer pointing to somewhere in the segment you actually want to erase!

Once it has erased, we reconfigure the controller to write mode (not block write), and then proceed to write the values we need, either as bytes or words. If you preserved the prior contents, a better method is to change the contents to the new values and use block write mode. For this tutorial, we'll keep it simple.

Finally, when everything is written, we clean up, clearing the write mode bit and locking the flash memory.
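Putting those steps together, the heart of the sequence looks something like the sketch below. This is not the uartcalG2231.c file itself; it assumes SMCLK is still the default ~1.1 MHz DCO, and chksum, calBC1, and calDCO are placeholder variables holding the values computed beforehand.

#include <msp430g2231.h>

// Sketch: write the TLV-style calibration block to Segment B (0x1080-0x10BF).
void write_segb(unsigned int chksum, char calBC1, char calDCO)
{
    FCTL2 = FWKEY + FSSEL_2 + FN1;      // flash clock = SMCLK/3, roughly 370 kHz
    FCTL3 = FWKEY;                      // clear LOCK
    FCTL1 = FWKEY + ERASE;              // segment erase mode
    *(int *) 0x1080 = 0;                // dummy write anywhere in SegB starts the erase

    FCTL1 = FWKEY + WRT;                // write mode (word/byte, not block write)
    *(int *) 0x1080 = chksum;           // two's complement checksum
    *(int *) 0x1082 = 0x38FE;           // Tag-Length: 0x38 bytes of TAG_EMPTY
    *(int *) 0x10BC = 0x0201;           // Tag-Length: one word of TAG_DCO_30
    *(char *) 0x10BE = calBC1;          // BCSCTL1 calibration for 7.3728 MHz
    *(char *) 0x10BF = calDCO;          // DCOCTL calibration for 7.3728 MHz

    FCTL1 = FWKEY;                      // clear write mode
    FCTL3 = FWKEY + LOCK;               // lock the flash again
}

The two calibration bytes are written through char pointers so that they land at 0x10BE and 0x10BF, matching the CALBC1_UART and CALDCO_UART definitions above.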


The code I wrote for writing the UART DCO calibration to Segment B using the TLV format is found in uartcalG2231.c. Remember, if you run this, it requires the watch crystal. If there is anything already stored in SegB, it will be lost. If you'd rather store it in Segment C (0x1040 to 0x107F) or Segment D (0x1000 to 0x103F), change the address in the assignment instructions accordingly. If you choose to put it in Segment A (0x10C0 to 0x10FF), look in the Family User's Guide for instructions on unlocking the Segment. I do not recommend this, however, since we're calibrating a non-standard DCO frequency. Wherever you choose to store it, you'll need to remember the address. If you use my method here, that address is 0x10BE for the CALBC1 value and 0x10BF for the CALDCO value. Next time we'll write code that pulls these calibration values from memory and configures the DCO and TimerA to start setting up our software UART.


This tool will be very useful to you; keep in mind that though the Value Line devices only have one DCO calibration and don't appear to have any calibrations for the ADC, there is space for them! The process is not too difficult, and you can use it to upgrade your devices with better calibrations, which are essential for accurate scientific work. For more information on the ADC calibration values that are often included in MSP430 devices, see the TLV chapter of the Family User's Guide.

02 November 2011

Tutorial 16d: Flash Memory and TLV

I've used a number of resources in preparing this tutorial; in addition to the Family User's Guide and the device datasheets, one of the most helpful was an application report titled MSP430 Flash Memory Characteristics.

We'd really like to hold onto our DCO calibration for two reasons. We'd like to be able to use it later, but we'd also like to do so without using up our limited code space with self-calibration. In addition, we may need it in an application where we don't have a crystal connected to the device. (Rather, we could program the calibrations in our LaunchPad with a crystal connected, and then transfer the chip to our intended application while retaining the calibration values in memory.) Programming the flash memory in the MSP430 is not difficult, but there are a few things that have to be done properly to protect your device. The steps done in programming make more sense when we understand how flash memory works.

The Science in Flash
A NOR flash bit is a transistor with a little pocket for storing charge.
The flash memory inside the MSP430 is what we call NOR flash (as opposed to the NAND flash that makes up your typical USB flash drive, for example). It works by trapping charge in an isolated region called a floating gate. The charge in the floating gate changes the transistor's threshold, and determines what value is read from the bit. Acting much like a switch, a positive charge in the floating gate makes it so the read voltage "closes" the switch, and we read a logic 1. A negative charge keeps the switch open, and we read a logic 0.

When we erase flash memory, each bit is put in a state where the floating gate is positively charged. Thus, a "blank" bit will always read 1; a byte of erased flash will read 0xFF. We can program the bit to a 0 by applying a high voltage to the control gate, which allows the floating gate to push charges through the drain. The resulting negative charge in the floating gate is retained, barring physical effects like quantum tunneling that take hundreds of years to de-program the bit. (At 25°C, the typical lifetime of a NOR flash bit is on the order of 1,000 years!) Fortunately, the ability to generate this high voltage is built into the Flash Controller peripheral of the MSP430, so we are able to program the flash memory as long as the operating voltage on our microcontroller is at least 2.2 V.

The flash memory in the MSP430 is organized in chunks called segments. The Main Memory portion is divided into segments of 512 bytes. (That means there are 4 segments in the main memory of both the G2211 and G2231, which each have 2 kB available.) Each segment is further subdivided into blocks of 64 bytes. Additionally, an extra 256 bytes is included in each MSP430 device, divided into segments of either 64 or 128 bytes (one or two blocks, respectively). These segments make up the Information Memory portion of the device. The Value Line devices that come with the LaunchPad have four information memory segments of 64 bytes each, called Segments A–D.

The physical structure of the flash cells is important to us, as it has direct bearing on how we deal with it. First of all, when we erase flash memory, we can only erase an entire segment at a time! This means any time we need to change a 0 in a single cell to a 1, every cell in the segment has to be erased. When we program flash memory, the high voltage is applied across an entire block at a time, even if we are only programming one cell. This voltage causes stress to the flash cells, and so we cannot exceed a specified "cumulative programming time", typically 10 ms. Erasing releases the stress, and essentially resets our cumulative timer to 0. These stipulations affect our programming speeds and how often the block needs to be erased.

At the typical speeds we can use (between 257 and 476 kHz), we can program an entire block twice. (Note that in this case, "program" doesn't mean we can write any value-- we can change 1's to 0's twice. To change 0's to 1's, we are forced to erase the entire segment.) We cannot, however, program the same byte multiple times, even if we don't program any other bytes in the block. The rule of thumb, then, is if you reach 10 ms of programming time or write to the same byte twice, the block (and thus the entire segment for main memory) must be erased. This is a real pain, especially if we want to use main memory to store our calibrations. Because of this, I would recommend using the Information segments for storing anything you want to retain outside of the actual program in the device. The process you would use would be to read the entire contents of the block to a buffer (in RAM), erase the Information Segment, change anything necessary in the buffer value, and re-write the buffer to the segment. (It seems like a lot more to do than you're used to when using a flash drive, but in reality the same thing is happening there. Your computer just does a fantastic job of hiding it so you don't have to worry about it.)
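As a sketch of that read-modify-erase-rewrite cycle (my own outline, not code from the tutorial files; the Flash Memory Controller registers it uses are described in Tutorial 16e, and it assumes SMCLK is the default ~1.1 MHz DCO):

#include <msp430g2231.h>

#define SEG_SIZE 64                        // one information segment is 64 bytes

// 'seg' points to the start of the segment (e.g. (char *) 0x1080 for SegB);
// 'offset' and 'value' pick the byte to change.
void update_info_byte(char *seg, unsigned int offset, char value)
{
    char buffer[SEG_SIZE];
    unsigned int i;

    for (i = 0; i < SEG_SIZE; i++)         // 1. read the whole segment into RAM
        buffer[i] = seg[i];
    buffer[offset] = value;                // 2. change whatever needs changing

    FCTL2 = FWKEY + FSSEL_2 + FN1;         // flash clock = SMCLK/3
    FCTL3 = FWKEY;                         // unlock the flash
    FCTL1 = FWKEY + ERASE;                 // 3. erase the segment
    *seg = 0;                              //    (dummy write starts the erase)

    FCTL1 = FWKEY + WRT;                   // 4. write the buffer back
    for (i = 0; i < SEG_SIZE; i++)
        seg[i] = buffer[i];

    FCTL1 = FWKEY;                         // clean up: write mode off,
    FCTL3 = FWKEY + LOCK;                  //           flash locked
}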

Information Memory
A quick word about the various segments: segments B–D are each alike in dignity. However, SegmentA is set aside specifically by TI to store information that should be retained regardless of any programming changes. The factory calibrations, for example, are stored in this segment. As a result, this segment is locked, and you will have to set a particular bit before any instruction that erases or programs within this segment. In addition, to ensure integrity of the data stored in SegA, one word (two bytes) of it is set aside as a checksum, which I'll explain shortly. If you change anything in the segment without updating the checksum, the checksum calculation will fail. Some programs rely on this, so if you really must change something in this segment, be prepared to deal with the consequences. Well, at least be prepared to recalculate what the checksum value will be. For our purposes in this tutorial, we'll use SegmentB instead, but we will structure it in much the same way as SegmentA. That way, if you choose, you can use SegA to store your calibrations, since that is where they are intended to be.

Memory Save Button
Let's take a look at how SegA is put together. Fire up the debugger in CCS; it doesn't matter what code you're using, as we won't even be running the program. We just want to enter the debug mode. Once there, the default view has a panel on the right side with tabs for the code disassembly and memory; the memory tab shows the contents of the flash memory of the device. (If this window is not visible for you, you can find it by selecting Window → Show View → Memory.) The information memory is located (as specified in the datasheet) from address 0x1000 to 0x10FF. SegA starts at 0x10C0 and ends at 0x10FF. You can use this window to browse that region, or you can save a particular region of memory to a text file. To do this, click the save button, navigate to a file you want to save the data to, and enter a Start Address of 0x10C0 and a length of 0x20. (Note it specifies to give the length in words; a word is 16 bits in the MSP430, so there are 0x20 (32) words in the 64-byte segment.)

An example of the output from my G2231 device is here. The first line specifies where in the memory the dump comes from. Each line after it holds one word; the first of these comes from the two bytes at addresses 0x10C0 and 0x10C1. Take note that the lower address of the word refers to the least significant byte (LSB, as opposed to lower case lsb for least significant bit), while the higher address refers to the most significant byte (MSB). The last line has the 32nd word–from addresses 0x10FE (LSB) and 0x10FF (MSB). Most of the memory in SegA is obviously blank, as most of the entries are 0xFFFF. (Remember, when flash is erased it reads logic 1.) There are a few programmed words, however, so let's see what each one means. In the x2xx Family User's Guide (current revision as of this writing is slau144h), turn to the chapter on TLV, chapter 24. TLV stands for Tag-Length-Value. This is the format TI uses in SegA to store information. Basically, one word is dedicated to specifying the length of memory allocated to a specific type of data and what type of data is stored there. Table 24-1 gives an example of how this is done. The first word in the segment stores a checksum value. Note that it specifies the checksum as the two's complement of the bitwise XOR. If you start with the next word and XOR it with the third word, that result with the fourth word, and so on, then add it to the checksum value, it will add up to zero. Something like this:


unsigned int chksum = 0;                 // running XOR of the segment contents
char passed;
unsigned int *i;

for (i = (unsigned int *) 0x10C2; i < (unsigned int *) 0x1100; i++) {
    chksum ^= *i;                        // XOR together every word after the checksum itself
}

if (chksum + *(unsigned int *) 0x10C0 == 0)   // stored word is the two's complement
    passed = 1;                          // checksum matches
else
    passed = 0;                          // segment was altered (or miscalculated)


Note: I'm not completely familiar with pointers just yet; I'm working that out in preparation for the next tutorial. If there's an error in this code, I'll correct it then. For now, think of it more as pseudocode.


Now that we understand how the segment memory is organized, let's look at what's inside it. Using my device, I have a checksum value of 0xB22C. The next word is 0x26FE. This means that the next 0x26 entries (38 bytes) are of type 0xFE. Looking in Table 24-2, we see that 0xFE refers to "TAG_EMPTY", meaning the next 38 bytes (or 19 words) are unused. Sure enough, the next 19 lines are all 0xFFFF. The next line gives the next Tag-Length entry: 0x1010. The next 0x10 entries (16 bytes or 8 words) are of type 0x10, which isn't specified in the Family User's Guide. I've submitted a question to TI support to find out about this. In any case, each entry is blank. The next Tag-Length listed is 0x0201, which means there's one word of type "TAG_DCO_30". Here we have the DCO calibration values at room temperature and 3 V. (Note that the Vcc value of the LaunchPad itself is 3.3 V, and remember that different voltage has a significant impact on the DCO!) There's only one entry, which we know has the values for CALBC1_1MHZ (0x86 on mine) and CALDCO_1MHZ (0xC4 on mine) as per the G2231 datasheet.
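If you wanted to find an entry by its tag in code rather than by eye, a short walk of the Tag-Length words does it. This is my own sketch, assuming the layout seen in the dump (tag in the low byte, length in bytes in the high byte):

// Walk the TLV entries in SegA (0x10C0-0x10FF) looking for a given tag.
// Returns a pointer to the first data word of that entry, or 0 if not found.
int *find_tag(unsigned char tag)
{
    int *p = (int *) 0x10C2;                    // first Tag-Length word (skip the checksum)

    while (p < (int *) 0x1100) {
        unsigned int tl = (unsigned int) *p;    // Tag-Length word
        unsigned char this_tag = tl & 0xFF;     // low byte: tag
        unsigned int  len      = tl >> 8;       // high byte: length in bytes
        p++;                                    // step past the Tag-Length word
        if (this_tag == tag)
            return p;                           // data for this tag starts here
        p += len / 2;                           // otherwise skip over the data words
    }
    return 0;                                   // tag not found
}

Calling find_tag(0x01) (TAG_DCO_30) on a stock G2231 should land on the word holding CALBC1_1MHZ and CALDCO_1MHZ.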


Reader Exercise: Using either your memory dump or mine, calculate the checksum of SegA and compare it to the value at address 0x10C0. Hint: xor'ing 0xFFFF twice has no effect; an even number of these lines can be ignored. When added to the stored checksum value, do you get zero? Calculate the two's complement by ~chksum + 1 and compare it to the stored value.


Preparing the Custom DCO Calibration
Here's the plan for our code: we'll find a custom calibration value for 7.3728 MHz and store it in SegB using the standard TLV coding set by TI. (You could do this in SegA if you wish, but since we're using a non-standard DCO frequency, I've opted to keep it out for now.) SegB is found in the memory range 0x1080 to 0x10BF. The organization of what we'll be writing will then be like this:
  • 0x1080: two's complement of the checksum
  • 0x1082: 0x38FE (Tag-Length: the next 0x38 bytes are TAG_EMPTY)
  • 0x1084–0x10BA: 0xFFFF (blank)
  • 0x10BC: 0x0201 (Tag-Length: one word of TAG_DCO_30)
  • 0x10BE: the DCO calibration values (CALBC1 and CALDCO)


We use the crystal to find the calibration value for 7.3728 MHz. This value is put into the above table to calculate the checksum: 
chksum = 0x38FE ^ 0x0201 ^ {calibration values}
       = 0x3AFF ^ {calibration values}
The value stored at 0x1080 is then ~chksum + 1.
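In code, with cal_word standing for the 16-bit word that will sit at 0x10BE (the two calibration bytes packed together), that works out to:

unsigned int chksum = 0x38FE ^ 0x0201 ^ cal_word;   // XOR of every non-blank word except the checksum itself
unsigned int stored = ~chksum + 1;                   // two's complement; this is the word written to 0x1080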


Next, we erase SegB, and write in the following addresses:

  • 0x1080: two's complement of chksum
  • 0x1082: 0x38FE
  • 0x10BC: 0x0201
  • 0x10BE: 0x{CALBC1}{CALDCO}
This tutorial has ended up pretty long, so we'll end the discussion here. The next tutorial will review the registers in the MSP430 Flash Memory controller and describe how to program this information to the Information Memory.

Reader Exercise: The Value Line devices do not come with the other standard calibrations: 8 MHz, 12 MHz, and 16 MHz. In the chapter on TLV of the Family User's Guide, we see that the locations of these calibrations are standardized for SegA to appear in the order TLV Tag, CAL_16MHz, CAL_12MHz, CAL_8MHz, CAL_1MHz. The SegA data in the Value Line devices doesn't reserve room for the three missing calibrations as blank entries, so the entire segment structure needs to be shifted. If we want to add these without losing any of the remaining structure, how will the segment be set up? Draw up a table similar to that used in this tutorial to map out the segment structure, and show how to calculate the new checksum value based on this table.

23 October 2011

Tutorial 16c: Accurate Clocks

Perfectly synchronized clocks can measure the bits anywhere in the middle.
Clock Accuracy
So just how accurate does a clock need to be for UART? First, consider a perfect pair of clocks, each operating at exactly the same frequency. In this case, it doesn't much matter where we measure the incoming bits, as long as we measure each bit after it has changed from the previous one and before it changes to the next. If there's some error, however, a slow clock will measure later and later, until it misses a bit completely. A fast clock will likewise measure earlier and earlier, until it measures the same bit twice. Not knowing beforehand if your clock is fast or slow, it makes sense to shoot for the middle of the bit; that way you have as much time as possible before the error takes over and you make a mistake.

Errors can add up very quickly when there are small differences between clocks.
Looking at the plot on the left, using clocks that are off by only 6%, notice where the first problem occurs: bit 9 is completely skipped! Using these clocks, we can't even reliably send one frame of data! (By the way, this type of transmission error is classified as a frame error.) My choice of 6% for this plot isn't arbitrary; the error quoted in the datasheets for the MSP430 gives the calibrated DCO frequencies (calibrated, mind you; not just the DCO in general) a tolerance over temperature and any other changes of ±3%. If we use the calibrated DCO as the clock for both sender and receiver, we could feasibly have as much as a 6% error, making UART transmission completely unreliable!

In reality, not all hope is lost; this is a worst case scenario, and likely the clock in your computer (assuming you want to communicate with it instead of another MSP430) is more accurate than that. A total of 5% tolerance between the clocks would, on average, encounter a frame error on bit 11, and 4% tolerance on bit 13, so as long as we have better tolerance than that and have a short gap between frames to "resynchronize", we should be able to communicate. Note this does mean we are limited in the frame sizes we can use reliably, and our actual data transmission rate will be a little less since we need a recovery between frames. If you want fast communication, you need an accurate clock!

Crystal Oscillators
The best option for accuracy in a clock is to use a crystal. The watch crystal that comes with the LaunchPad is accurate over a wide temperature range to 20 ppm (0.002%)! There are some disadvantages, however: crystals take a while to stabilize, and use more power. In addition, at 32,768 Hz, we can't achieve very high transmission rates. Typically we want at least 16 clock cycles per bit to accurately catch the start bit and start sampling at the center of each bit. Using that rule of thumb, the highest bit rate we could attain with the watch crystal is 2048 baud. Larger MSP430 devices allow for high frequency crystals, but the G2xx series that are compatible with the LaunchPad do not. However, if we're careful, getting transmission rates up to 9600 baud with the watch crystal can be done.

Soldering the crystal to your LaunchPad might seem a daunting task with such a small part, but in reality it's not terribly difficult. A little bit of patience (mostly in the form of a little piece of masking tape) is all you need. Aside from a soldering iron and solder, of course. If you have not already put a crystal on your LaunchPad and would like to, there are a number of good demonstrations of the technique for soldering the crystal to the board available on YouTube. For now, I'll assume you've successfully put it on.

You may have noticed a couple of empty pads near the crystal for capacitors. For a crystal to oscillate at the right frequency, it needs to see a particular capacitance to ground. If the capacitance is off, the frequency may be off, or the crystal may not oscillate at all. The crystal included in the LaunchPad kit wants to see 12.5 pF. The MSP430 crystal inputs also have user-selectable capacitances internal to the device. The user can select from 1, 6, 10, and 12.5 pF as the effective capacitance seen by the crystal. (The device defaults to 6 pF.) The selection is done by the two XCAPx bits in the BCSCTL3 register.

If for some reason you need a capacitance other than one of these, you can solder the proper capacitors to the pads on the outside. Unfortunately, it's not as easy as putting 12.5 pF capacitors on the pads; the capacitors you put on will be in parallel with the capacitance from the traces and the chip itself. The formula for calculating the right load capacitors is: C1 = C2 = 2*C_Load - (Cp + Ci). C_Load would be whatever capacitance the crystal expects to see, Cp any parasitic capacitance from traces etc., and Ci the Capacitance of the MSP430 device. The last two terms can be assumed to be whatever is set in XCAPx. So for a crystal that wants 18 pF, you would want to put 30 pF capacitors on the board (using the default 6 pF).

Most of what's needed to use the watch crystal is set as the default values in the BCS+ module. To use the supplied crystal, you really only need two lines:

BCSCTL3 |= XCAP_3;       // 12.5 pF for LaunchPad crystal
__delay_cycles(55000);   // let crystal stabilize

The first line sets up the proper capacitance for your crystal, and should be changed if you're using a different capacitance value. The second line lets enough time pass for the crystal to stabilize at the operating frequency. How long do you really need to wait? Typically you need a few hundred milliseconds. The above code will wait for 55000 clock cycles; at the default DCO of 1.1 MHz, that's about 50 milliseconds. Depending on your application, you can wait longer if necessary.

Self-Calibrating the DCO
If you're paying attention, you might have spotted another problem with the watch crystal: how do we get a standard baud rate from it? At 32,768 Hz, 9600 baud works out to only 3.41 clock cycles per bit. 1200 baud uses more clock cycles per bit, so it's more reliable, but it's still a fractional 27.31 cycles. The closest division we can get is 27 clock cycles, which corresponds to 1213.6 baud, an error of 1.1%. It seems we've lost a lot of the accuracy we gained using the crystal! There are crystals available that divide perfectly into the standard baud rates. A 7.3728 MHz crystal, for example, has exactly 768 clock cycles per bit for 9600 baud. If you use an MSP430 device that allows for a high frequency crystal, this is an excellent choice for serial communication at standard rates. Another option, however, is to use the DCO.

But wait, didn't we just argue that the error margins of the calibrated DCO are too large to be reliable? Yes, but under certain circumstances we can do better. For one, the error margins quoted in the data sheet cover both the entire temperature range where the MSP430 can be used-- from -40° to 85°C-- and the entire range of voltages at which it can operate-- from 1.8 to 3.6 V. Generally we won't be operating in such extremes. Even if we are, as long as the temperature and operating voltage aren't going to change too much during operation, we can recalibrate the DCO to that particular configuration! The accuracy of the clock will be much better than the quoted 3% in this case; the datasheet even specifies that from 0° to 85°C, the calibration is good to 0.5% at 3 V. (Obviously a consistent voltage is the real key to a stable DCO.)

Calibrating the DCO is quite simple, but it requires a very accurate clock to compare the DCO against; this is a job for a crystal. The idea is simple-- use the crystal to time an accurate interval (say 1 s.) We know how many oscillations should occur for a given frequency in that interval, so we adjust the DCO until we get as close to that as we can get. Then we save the values for DCOCTL and BCSCTL1, and we have an accurate calibration for our clock! Here's how all the magic happens:


void Set_DCO(unsigned int Delta) {    // Set DCO to selected
                                      // frequency
  unsigned int Compare, Oldcapture = 0;


  BCSCTL1 |= DIVA_3;                  // ACLK = LFXT1CLK/8
  TACCTL0 = CM_1 + CCIS_1 + CAP;      // CAP, ACLK
  TACTL = TASSEL_2 + MC_2 + TACLR;    // SMCLK, cont-mode, clear


  while (1) {
    while (!(TACCTL0 & CCIFG));       // Wait until capture     
                                      // occurred
    TACCTL0 &= ~CCIFG;                // Capture occurred, clear 
                                      // flag
    Compare = TACCR0;                 // Get current captured 
                                      // SMCLK
    Compare = Compare - Oldcapture;   // SMCLK difference
    Oldcapture = TACCR0;              // Save current captured 
                                      // SMCLK


    if (Delta == Compare)
      break;                          // If equal, leave  
                                      // "while(1)"
    else if (Delta < Compare) {
      DCOCTL--;                       // DCO is too fast, slow 
                                      // it down
      if (DCOCTL == 0xFF)             // Did DCO roll under?
        if (BCSCTL1 & 0x0f)
          BCSCTL1--;                  // Select lower RSEL
    }
    else {
      DCOCTL++;                       // DCO is too slow, speed 
                                      // it up
      if (DCOCTL == 0x00)             // Did DCO roll over?
        if ((BCSCTL1 & 0x0f) != 0x0f)
          BCSCTL1++;                  // Sel higher RSEL
    }
  }
  TACCTL0 = 0;                        // Stop TACCR0
  TACTL = 0;                          // Stop Timer_A
  BCSCTL1 &= ~DIVA_3;                 // ACLK = LFXT1CLK
}

This code comes from the dco_flashcal.c example code available with most of the MSP430 devices. The example code file for the G2xx1 devices seems to not have it; I copied this from the F21x2 examples and changed TACCTL2 to TACCTL0 to be compatible with LaunchPad devices.

It's a bit of code to sort through, but it turns out to be straightforward. ACLK is configured to use the watch crystal divided by 8-- 4096 Hz. The timer is set to capture mode, triggering off a rising edge of CCI0B. (From the G2231 or G2211 datasheets, this corresponds to ACLK. If these terms seem confusing, review the tutorial on the capacitance meter which used the capture mode.) The timer itself is running off SMCLK, sourced by the DCO. If we want to calibrate 2 MHz, then in one clock cycle of ACLK, we expect 2 MHz/ 4096 Hz = 488.3 SMCLK cycles. We pass the value 488 to this routine, which starts the clocks and timers. When a capture occurs, it checks to see if more or fewer cycles of SMCLK have happened, and adjusts DCO and RSEL accordingly. It repeats this, until it finds the configuration that returns exactly 488 cycles in the interval. The values in DCO and RSEL are then the calibration values we want to save; we just look at the DCOCTL and BCSCTL1 registers and save their values for future use.
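For the 7.3728 MHz frequency we want for the UART, the call would look something like this (7,372,800 Hz / 4096 Hz = 1800; the names for saving the results are my own):

Set_DCO(1800);                  // 7.3728 MHz / 4096 Hz = 1800 SMCLK cycles per ACLK period
char calBC1_uart = BCSCTL1;     // save the range/divider settings just found
char calDCO_uart = DCOCTL;      // save the DCO tap and modulation settings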

This routine is used in the example code found in DCOcalibrate.c. Try it out (with the crystal soldered on, of course), and see what values are obtained for DCOCTL and BCSCTL1 for each frequency it calibrates. If you happen to have an oscilloscope, measure them on P1.4 to see if they're right. (The code finishes on the last calibration done; you can modify the code to end with the one you want to measure, or add code to change to the frequency you want to see.) You can write these values down for future reference, but next time we'll look briefly at writing to the flash memory in the MSP430 so you can save it for use later on!

Reader Exercises:  How good is the calibration done in the factory? Modify the code to find the calibration values for 1 MHz. How do they compare to the values stored in CALDCO_1MHZ and CALBC1_1MHZ? It seems many have reported that the 1 MHz calibration from the factory, at least for early batch runs of the value line devices, is closer to 980 kHz.


How much does temperature affect the results? Place your LaunchPad somewhere warm for a while (or cold; a freezer might not be best, though-- too much water around!) and re-run the code. How much difference is there in the calibration values?


Could you use a calibrated frequency to go the other direction and measure the crystal frequency? Imagine doing an experiment to see how the four XCAPx settings might affect the crystal. Which ones oscillate? How much does the frequency change if you use 10 pF instead of 12.5 pF? See if you can write some code to find out!

Tutorial 16b: UART Definition

Universal
So what exactly makes the Universal Asynchronous Receiver-Transmitter universal? The UART has a long history, starting way back in the 1840's with some of the first telegraph systems. Back then, when the telegraph key was held down, a current would flow in the receiver, pushing a stylus into a strip of paper, leaving a "mark". The Morse code signals sent would then visually display on the paper, making it simple to read the transmitted message. Of course, it didn't take long before the operators got so used to hearing the patterns of clicks that they found they could just as easily listen to the message as write it on a piece of paper, and sounds began being used instead of a mechanical system. Of course, the sounds would turn on when a current was flowing in the receiver, so the signal was still divided into "marks", where current was flowing, and "spaces", where it was not. In other words, the "standard" had changed from a stylus on a paper to listening by ear, but the "protocol" of Morse code stayed the same.

Morse code was a phenomenal technology change, making it possible to send messages easily over very long distances, particularly when radio was implemented, and wires connecting between the source and destination were no longer needed. While that was happening, someone realized a financial benefit could be obtained by using the technology, and hence was born the first ticker tape machine for the stock market. These machines changed the technique slightly. Instead of using special codes for each character, a series of pulses would be sent to turn a printing wheel from its current position to the next letter to be printed. A special pulse signal would instruct the printer to stamp the current letter onto the tape. As technology improved, rather than a rotary printing wheel the Baudot code was developed as a new protocol, equating particular pulse patterns to particular characters.

Like telegraphy, the teletype grew with the new technologies of radio and, in particular, the computer. Teletype machines became useful not only as a means of communication between people, but also as an interface to early computers. Instructions could be sent by typing a particular pattern of keys, sending a particular pattern of pulses to the computer. Results would be sent back with a similar pattern of pulses to a printer, which would translate them back to the letters and numbers we needed to understand them. But even though the technologies had taken different paths, both came from the same beginnings with Samuel Morse. As such, some characteristics and naming conventions stuck; in particular the use of "mark" and "space" to designate when current was flowing (logic high) and when it was not (logic low).

In computers, a change was made from detecting current flow to just measuring a voltage. Some of the conventions continued, which is why the RS-232 standard has logic high as a negative voltage. The negative voltage originally would open the current of a teletype machine to produce a "mark" signal. A positive voltage would cut off the current, producing a space. As it turns out, different circumstances (and sometimes just different companies) would require a slightly different standard for sending serial data. Connectors and voltage levels for mark and space wouldn't be the same, but the protocol (the way of encoding the characters in pulses) would carry over. In particular, the use of transistors made it easy to create a universal system that could be understood by any computer or device, as long as each device had something to convert the transistor logic (TTL) signals into whatever standard they expected. The protocol was changed as well, using ASCII to encode the data into digital information. Thus was born the Universal Asynchronous Receiver-Transmitter. (Note that TTL uses positive voltage, be it 5 V, 3.3 V, or anything else, for "mark" and 0 V for "space".)

Asynchronous
Now that it's clear what makes the UART universal, let's look at what is meant by asynchronous. In radio, you can send a message (by voice, digital code, morse code, or whatever), but the message cannot be received unless someone is listening. Serial communication, unfortunately, requires more than just a signal saying data is ready to be transmitted. Imagine a system where I'm going to send a message to you by holding up a giant sign. We agree beforehand that at 1:32 PM I will put the sign up, and at the designated time you look in my direction, see the sign, and read the message. This would constitute a parallel type of transmission--each letter was visible all at once. Now let's say I just don't have access to a big enough piece of paper to write the whole message, but I can send you one letter at a time. So we agree that every 10 seconds, I'll hold up a new letter. You come at the specified time and see me hold up the first letter, which you record on a piece of paper. Every 10 seconds you look back, and I'm holding a new letter up and you record it. This is serial communication. But what happens if one of us has a bad clock that says 10 seconds are up when, say, 12 seconds have passed? Eventually the mismatched timing causes you to either record the same letter twice or miss a letter completely, depending on whose clock is faster. In order to ensure the message gets through, our clocks need to be synchronized.

There are synchronous methods of serial communication, including both SPI and I2C, which we'll address in the future. These methods have synchronized clocks by using the same clock for the sender and the receiver. The disadvantage is that sharing a clock means another wire. It's clear from the history behind the UART why a clock signal was not included along with the message; instead, both the transmitter and receiver agree beforehand at what rate the data will be sent. This allows sender and receiver to have their own clocks, which don't have to be synchronized in terms of when the second hand ticks, but it does require that each person's clock is accurate. Asynchronous communication simplifies the connection by not needing a second signal in parallel with the data, at the cost of needing an accurate way to time intervals between data.

Receiver-Transmitter
Enough history; let's look at how the UART actually transmits information. Whatever protocol we may be using, we are able to encode data as a series of 1's and 0's. We can encode a number as its binary representation, or we can encode a character as a particular binary number. In any case, we have a certain number of 1's and 0's to send. In a UART, we also add on at least two extra bits: one to designate the start of a new set of data, and one to designate the end. These start and stop bits with the data bits in between constitute what we call one "frame" of data. Using ASCII encoding, often 7 bits of data are sent. In addition, a tenth bit can be added between the data and the stop bit to help determine whether the data received was correct. If the sender and receiver agree that every frame will have an even number of 1's in it, then this "parity bit" would be 1 or 0, depending on the number of 1's in the rest of the message. The receiver could then look at the 7 data bits and parity bit, add up the number of 1's, and if the total number is even be confident that they received the right message. In 8- and 16-bit systems like a microcontroller, it could also make sense to send data in 8 bit segments instead. Often no parity bit is included in this case, to keep the total data length of each frame to 10 bits. The compromise is that there is no way to check for errors in the transmission, but generally error checking is only necessary under particular circumstances.

Let's say we want to encode the letter "D" using 7-bit ASCII encoding and odd parity. The ASCII code for the letter "D" is 0x44, or 0b1000100 in 7-bit binary. Now we face a choice: do we send the least significant bit first, or the most significant bit first? The typical protocol used in UART is what we call "little-endian", meaning we start with the least significant bit. (This makes sense when you think in terms of a shift-register; the SR in a UART pushes bits from high to low, so you send the lowest bit first.)

Representation of the ASCII character "D" in UART TTL.
UART uses logic high as the default (or idle) state. So to start a message, we want to change from high to low. Thus, our start bit will be a 0. Likewise, to stop we want to go back to the default state, so the stop bit is 1. So far, our total encoding is now "00010001x1", where x represents our parity bit. We want odd parity, and there are two 1's in the representation for "D", so we set this bit to 1 to ensure an odd number in the whole message. Our final message is the 10 bit stream "0001000111". (If we do this with 8-bit data and no parity, we would have "0001000101", this time the 2nd to last bit being the most significant bit in the 8-bit code 0b01000100.)
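If you wanted to build that frame in code rather than by hand, a helper like the sketch below does it (my own illustration, not part of the upcoming UART code): start bit, 7 data bits LSB-first, odd parity, stop bit, with bit 0 of the result being the first bit on the wire.

// Build a 10-bit UART frame (start + 7 data bits LSB-first + odd parity + stop)
// for a 7-bit ASCII character. Bit 0 of the result is sent first.
unsigned int build_frame(char c)
{
    unsigned int frame = 0;
    unsigned int ones  = 0;
    int i;

    // the start bit is 0, so bit 0 of 'frame' simply stays 0
    for (i = 0; i < 7; i++) {               // data bits, LSB first
        if (c & (1 << i)) {
            frame |= 1 << (i + 1);          // data occupies bits 1..7 of the frame
            ones++;
        }
    }
    if ((ones & 1) == 0)                    // odd parity: force an odd count of 1's
        frame |= 1 << 8;                    // parity bit is bit 8
    frame |= 1 << 9;                        // stop bit (back to the idle/mark state)
    return frame;
}

build_frame('D') returns 0x388, which read from bit 0 to bit 9 is exactly the stream 0001000111 above.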

Hopefully this gives you a clear picture on how UART works. There are really no limitations on the protocol you use, so long as you have a start and stop bit. As long as the sender and receiver agree on what goes in the middle and at what rate the information comes, it will work. The standard protocols such as those illustrated here are convenient as it's very simple to use a computer to read the data coming from the microcontroller. In addition, keep in mind that there are standard speeds for transmitting bits (bits per second (bps) or baud), which are left over from the old teletype days. In any case, many systems are limited to using these rates, so it's often a good idea to standardize to them. If you have your own internal system, use whatever baud rate is convenient, but do remember that a lot of computers and devices may expect one of the more conventional rates, like 300, 1200, 4800, 9600, or 115200 baud.

We've identified one of the key things we'll need for a successful UART: a good clock. Next time we'll look at some options, their limitations, and how to implement them.

Reader Exercises: Using 7 bit encoding with even parity, what would the bit stream look like to send the character "j"?  How about the character "k"?
Using 8 bit encoding with no parity, what would the bit stream look like to send the newline character, "\n"?  How about the character "&"?

17 October 2011

Tutorial 16a: Getting Serial

In order to do an actual scientific experiment using the MSP430, we need one more tool. To be fair, we could make do with what we've covered so far, but it requires constant (or at least regular periodic) monitoring of the equipment, and manual recording of the data displayed on the LCM. No, what we need is a way to automatically record the data when it is taken.

There are two different paths open to us at this point. The first: the MSP430 has on-board flash memory, which we could use to record multiple measurements. The other, more complicated path is to learn how to communicate between the LaunchPad and a computer via USB. There's some elegance in starting with the former, as our focus to this point has been on the LaunchPad itself, but unfortunately we'd need a way to transfer the data from the flash memory to a usable location anyway, which more or less requires connection to a computer. So even though it will delay getting to some of the cooler things we can do with the MSP430, it's time we tackle serial communication. Once we have this piece mastered, we'll start a little science experiment that will take me a few days/weeks to complete. During that time, we'll begin looking at recording to flash, communicating with external peripherals, and how to put all the pieces together for remote data collection. We'll also start looking at alternative power systems, system control, and other great things that will completely open the field of what's possible with a microcontroller. The future looks bright; but first we'll have to tackle this difficult task.

Well, things aren't really so bleak... serial communication isn't that complicated. In fact, most MSP430 devices have peripherals built in already for that very purpose, making it simple to do. However, of the two devices that come with the LaunchPad, only the G2231 has one of these peripherals, and it only has two modes of operation, conspicuously missing the one we really need first: the Universal Asynchronous Receiver/Transmitter, or UART. So, instead, we are going to turn to learning to implement this functionality in software.

Fortunately, there's some real advantage to this; a solid understanding of how serial communication works helps us understand how to process and record scientific data. In fact, when we get to the USI/USCI peripherals, looking at other modes of communication such as SPI and I2C, we'll take the time to understand how these methods send data. (The particular implementation of a serial communication system is called a protocol. There are even more protocols available, including Bluetooth, Wi-Fi, and ZigBee, which are cool things we'll tackle some day!)

You might be asking, "Why are we going to rehash software UART? Lots of people have published articles about it already, and lots of code and examples are available." Well, I'd respond that there are two reasons. The more philosophical reason is that you become a better scientist when you understand how the tools you're using work; Einstein once said you don't really understand anything until you can explain it to your Grandmother. The more practical reason is that none of the articles I've perused give much explanation to why the code is set up the way it is. That's our goal here: by completely dissecting the software UART, we learn how serial communication works, and get a thorough example of using the MSP430 peripherals to our advantage in getting jobs done. We'll also do a very thorough job, starting with just transmission (I guess technically it would be UAT), then moving to just reception (likewise UAR), then designing a full-on UART transceiver. Along the way, we'll talk a bit about crystals as well as learn about calibrating our DCO. We'll even talk about saving DCO calibration to the flash memory, and introduce the concept of a checksum. (So we'll see a little bit about writing to flash memory soon after all!)

If that sounds like a lot to cover, it is. I'll do my best to keep the posts coming regularly and quickly, so that we can move on to more advanced ideas soon. There is motivation for approaching this topic in this way at this time, however. These tutorials have always been designed as notes from my own learning. As a result, sometimes the methods/styles have been a little disjointed, but one of the goals of this blog was to put together a curriculum that could be used to teach science students in a one-semester course on microcontrollers. (After graduation, I'll gather, edit, and format these tutorials into a book that can be downloaded for just such a purpose.) I think the material we've covered to this point fits about a one semester course very well, so think of this tutorial as the final project for the course. It's a bigger concept that will take a while, but will draw on our knowledge from the other peripherals and skills we've learned. The fact that we'll introduce some new ideas along the way will add to the sum total of knowledge taken away from this course. So strap in; we're going to start the final for MSP430 101!

16 October 2011

Question for Readers

We're fast approaching the end of what I would call the 'basic tutorials', and the point where I'll move on to interfacing with the real world and other cool toys and devices. Unfortunately, that means I need some cool toys and devices. In brainstorming ways to help fund this little hobby, a friend suggested to me that I might consider using Google AdSense on this blog. If I were to do so, I would want it to be unobtrusive, as my intent is not to sell things for other people. How would you feel if I were to do this? Would I be better off thinking of another way to raise a little hobby revenue?

Tutorial 15b: Using ADC10

Like the Comparator_A+ peripheral, ADC10 has a wide range of operating modes and features. It can also integrate with other peripherals (such as Timer_A, of course), making it a very powerful tool in scientific measurements. Today we'll look at basic configuration of ADC10 in preparation to do a full scientific experiment using the MSP430. Keep in mind that this tutorial requires a device with the ADC10 peripheral, such as the G2231. The G2211 chip that comes with the LaunchPad will not work in this tutorial.

First, let's examine some of the features of the ADC10 peripheral. ADC10 of course requires a clock, and can source from any of the three clocks in the MSP430 (and subsequently from a crystal, DCO, or VLO). In addition, ADC10 comes with its own internal clock that can be used independently of the system clocks. This clock is typically in the 5 MHz range, but is uncalibrated and thus varies from chip to chip, as well as with operating voltage and temperature. A major advantage to the internal oscillator is that it can remain in operation even when other clocks are powered down in an LPM.

ADC10 can connect to up to 16 different inputs. Typically, 8 of these are external inputs, 4 are internal inputs, and 4 are other references (on some devices, they are extra external inputs for a total of 12). The G2231 device has 8 external inputs (on each of the pins in P1), and also has a temperature sensor built into the chip as an internal input (in addition to the other three internal inputs, which have to do with voltage reference comparisons).

Like the Comparator_A+, ADC10 requires a reference voltage for operation. In fact, it can operate with two reference voltages for the upper and lower bounds of conversion. The upper reference can be anywhere between 1.4 V and Vcc (up to 3.6 V). The lower reference can be between 0 and 1.2 V. There are two references available inside ADC10 at 1.5 V and 2.5 V, and it can also use Vcc or an external reference.

Finally, the ADC10 also has 4 operating modes. Two of these modes sample only a single channel. The other two modes cycle through a specified set of the 16 possible inputs. Each single/sequence type can be done only once, or repeated. (Note: in a sequence mode, you must use the inputs in order. If you want to sample 3 inputs, you must use A0, A1, and A2. Unfortunately, the only way to sample arbitrary inputs is to use single channel mode and change channels in software.)

That covers the bulk of the dizzying ways to configure the ADC10; it's a lot to sort through, so keep in mind that we're only going over it so you have the different things you can do in the back of your mind. The best way to learn how to use all of the features is by example, so let's look at a simple example by modifying the capacitance meter project to a voltage meter. This meter will be of limited use, as it will only be able to measure voltages between 0 and 3.3 V, but it will illustrate the idea. We'll display the output on the LCD as before, but the code can easily be modified to pause in the debugger to find the result as was done at first with Comp_A+ if you don't have an LCD. For the LCD display, we'll use the single-channel mode and repeat the measurement in software, which makes it easier to use the debugger to see the result.

The x2xx User's Guide gives a set of diagrams to explain the process used in each of the four modes. For example, in the repeat single-channel mode, the peripheral is turned on and enabled. The ADC10 is triggered to start a conversion, which is stored upon completion. If interrupts are being used, the flag is set, and the ADC10 returns to one of three steps, depending on just how we set it up. Keep in mind this all happens within the ADC10 module itself, leaving the MSP430 free to perform any other actions it needs to. You can use the ADC10 interrupts to do something with the code after samples are taken.

Our code will instead use single-channel mode, which is very similar to the repeat single-channel mode, but without the repeat part. =) I've chosen this mode because I won't be using a low power mode, and it's easier to coordinate timing so that a sample isn't taken and finished while waiting for the LCD to update. While a new conversion is occurring, the code will trap in a loop before writing the measured sample to the LCD for display. Once that's finished, a new conversion will be started to update the measurement.

There are 8 registers associated with ADC10; of these, 4 are used to configure the peripheral. One is used to store the individual samples, and three are used to control transferring the sample data for storage. (More on this later; for now we're going to keep it simple and only worry about 5 registers!)

While this sounds like a lot of configuration, fortunately two of the registers are used solely for configuring the inputs. ADC10AE0 enables the ADC function of the external pins being used. (This is necessary, because P1SEL changes the operation of those pins to a function other than ADC; since it's a binary value, P1SEL can only configure two different operations. This register frees up those pins for uses other than just ADC!) ADC10AE1 performs a similar function, but only for devices with more than 8 analog inputs.

We'll look at the other two configuration registers in more detail. ADC10CTL0 handles some of the base configurations of the peripheral-- the voltage references, sampling time and rate, and handling power and interrupts for the ADC. ADC10CTL1 controls the inputs, clock, mode, and data formatting. Here are the essential pieces for each register (we won't cover all of them today):
ADC10CTL0
  • SREFx (Bits 15-13): These select one of 8 different configurations for the upper and lower references for the ADC.
  • SHTx (Bits 12-11): These select 4 different sampling times for the ADC. The voltage is held constant during conversion by charging a capacitor; these control the amount of time you allow for charging. Obviously more time ensures a more accurate sample, but limits the sampling rate achievable by the device and risks having the voltage being measured change during the sampling time. You can select 4, 8, 16, or 64 clock cycles (of ADC10CLK).
  • REF2_5V, REFON (Bits 6,5): Selects between 1.5 V and 2.5 V references and turns the reference on/off.
  • ADC10ON, ENC,ADC10SC (Bits 4,1,0): Turns on the ADC, Enables Conversion, and Starts Conversion respectively.
  • ADC10IE, ADC10IFG (Bits 3,2): Interrupt enable and flag.
ADC10CTL1
  • INCHx (Bits 15-12): In single channel mode, selects the channel to sample. In sequence mode, selects the highest channel to sample.
  • ADC10DF (Bit 9): change between straight binary data and 2's complement data.
  • SSELx, DIVx (Bits 4-3,7-5): chose the clock source and divide the clock frequency by 1-8.
  • CONSEQx (Bits 2-1): Select the sequence mode.
  • BUSY (Bit 0): a read-only flag that indicates when the ADC is in the middle of a sample/conversion cycle.
That's a very brief overview; we can't cover all of the features in detail in a reasonable introductory tutorial, so we'll examine more advanced features in the future as they come up. In the meantime, read the User's Guide and documentation to understand more of what all of these do.

Last of all, we'll mention the ADC10MEM register. When a conversion takes place, the value is stored in and read from here. If ADC10DF is cleared (value 0), we can read this straight away: 0x000 corresponds to the lower reference, 0x3FF to the upper reference, and the intermediate values fall on a straight line between the two. If ADC10DF is set (value 1), the value is stored in 2's complement. This can be useful for transferring data in some configurations, but we'll not need it today.
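To tie the registers together, here's a minimal single-conversion sketch along the lines of what the volt meter below does (my own sketch, not the VMeterG2231.c file itself): sample channel A1 once, using Vcc and Vss as the references and the internal ADC10OSC as the clock.

#include <msp430g2231.h>

unsigned int sample_A1(void)
{
    ADC10CTL0 &= ~ENC;                           // configuration can only change while ENC = 0
    ADC10CTL0 = SREF_0 + ADC10SHT_2 + ADC10ON;   // Vcc/Vss references, 16-cycle sample time, ADC on
    ADC10CTL1 = INCH_1 + ADC10SSEL_0;            // input channel A1, ADC10OSC as the clock
    ADC10AE0 |= BIT1;                            // enable the analog function on P1.1

    ADC10CTL0 |= ENC + ADC10SC;                  // enable and start a single conversion
    while (ADC10CTL1 & ADC10BUSY);               // wait for the conversion to finish
    return ADC10MEM;                             // 0x000 = lower reference, 0x3FF = upper
}

With a 3.3 V supply, a reading can then be scaled to millivolts with something like (unsigned long) sample_A1() * 3300 / 1023.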

That does it for a brief (but long!) summary of the basics. Let's look at the simple volt meter now. The code for this project can be seen in VMeterG2231.c. It will require the simple_LCM library from the previous tutorial. There's very little that's new here, and the code should be clear by itself. It uses input A1 on P1.1. You can test the code by using a potentiometer connected between Vcc and ground and connecting the wiper to P1.1. When you turn the potentiometer, you should see the corresponding value change on the LCD.

So much for the basics; next time we'll look at how to store the data for later analysis and start working on an actual experiment.

Reader Exercise: How can you use this same code to measure a larger voltage range? Hint: a simple way to do it uses only two passive components. A trickier task is to be able to measure positive and negative voltages; can you think of a way to do this even though the MSP430 can't use a negative voltage reference? Hint: an op amp might help.

29 September 2011

Tutorial 15a: Analog Signal Conversion

Looking back on the past tutorials, we really only have two more major peripherals to learn. Today, we'll start taking a look at the Analog to Digital Converter (ADC), then learn about serial communication methods in preparation for an actual scientific experiment.

The natural counterpart to the Analog to Digital Converter is the Digital to Analog Converter. Unfortunately, it's very difficult to fully understand either of these without understanding the other, but we can learn a great deal about each to start. To understand the actual inner workings of an ADC or DAC system, we can start with the basic ideas and then learn how they are implemented. So first, let's examine what it is that's different between an analog and a digital signal.

An example of an analog signal
An analog signal is what we're most likely to experience in the physical world. An incandescent light bulb can be put in a circuit where the brightness is adjusted by rotating a knob. The oven in your kitchen can adjust its internal temperature to any value between about 170 and 500 deg F. The music you listen to pulls the membrane of a speaker in and out to create pressure waves that our ears interpret as sound. For the most part, the universe around us is analog: any measurement can take any value within a continuous range. As a visual example, consider the sine wave. No matter how closely you look at this function, it's always smooth--each of the infinite number of values between -1 and 1 is found in the curve.

While there are digital equivalents in the universe, for the most part our common encounters with digital signals reside in the realm of computers and electronics. The key difference is that measurements can only take discrete values; it's like saying the value can be 1, 2, or 3, but not 1.3 or 2.14. Consider the discrete version of the sine wave shown here.
A digital representation of the analog signal

The red lines represent the digital values that approximate the sine curve (shown in cyan for reference). While this example may not look so great, picture what would happen if we could have more than the 9 possible values available here---you can probably imagine that more values in the discretization would give us a better approximation. The simplest ADC peripheral in the MSP430 is a 10 bit system, which gives us 2**10 (or 1024) values. The sine approximation looks quite good with this set:

10-bit digital approximation of the analog signal

Obviously, more bits give a better representation, but they come at a cost in both complexity and speed. The reasons for that are more apparent when we understand how an ADC comes up with its values, so let's take a look at the ADC itself.

In reality, we've already looked at an elementary ADC by using the comparator. When we convert an analog value (in this case a voltage) to a digital value (a number), the result tells us something about the magnitude of the analog signal. For the comparator, a particular voltage threshold is set for the analog value, and the number is either 0 or 1, depending on whether the signal is greater than or less than the threshold. This makes a 1-bit ADC; the result is expressed in a single bit.

While very useful for more applications than you might expect from a single bit, the comparator is limited in how much it can say about the analog signal coming in. If the signal crosses the threshold, we see the bit change between 0 and 1. But if the signal is changing above or below the threshold without crossing it, we have no way of seeing that occur.

The simple-minded solution would be to add another comparator with a different threshold. If we set the threshold on the first comparator to 1 V, and the second to 2 V, with both returning 1 when Vin is greater than the threshold, we would have the following possible results:

  • 0,0:  both comparators are below threshold, so the voltage must be less than 1 V.
  • 1,0:  the first comparator is above threshold, but the second is below, so the voltage must be greater than 1 V, but less than 2 V.
  • 1,1:  both comparators are above threshold, so the voltage must be greater than 2 V.
This setup would work, but notice that it's not completely efficient. In particular, it is impossible to get a result of 0,1, because the voltage cannot be greater than 2 V but less than 1 V. So adding comparators can give you a broader view of the analog signal, but it's not the best way to do it.

The MSP430 ADC10 peripheral uses a Successive-Approximation-Register ADC, or SAR ADC. This fancy name is simply a description of how the ADC makes its measurement. Have you ever played the number guessing game? Say I ask you to guess the number I'm thinking of between 1 and 100, and I'll tell you if you're low or high. What's the most efficient way of getting there? If you guess 90, and I say you're low, then you've narrowed it down to 10 possible values in just one guess! On the other hand, if I say you're high, then you've only eliminated 10 possible values. When you consider that compromise, it's clear the best starting guess would be 50-- you're guaranteed to eliminate half of the possibilities. What next? Well, it would make sense to cut the remaining possibilities in half-- if 50 was low, guess 75; if high, guess 25. If you continue on with this algorithm, it won't take you too long to come up with the number I'm thinking of. (It will take you at most 7 guesses.)

SAR works in the same way; note that in any binary representation, a 1 followed by zeros is half the total possible range for the same number of bits. (E.g. 0b1000 is 8, while the upper limit 0b1111 is 15.) If we take a single comparator and set its threshold to half of our reference voltage, we get our first bit-- if the input is above 1/2 Vref, set the bit to 1; if below, set it to 0. Then we set the comparator reference to either 1/4 or 3/4 Vref (depending on the value we just got), and compare again to get the next bit. Using this method, you can come up with the digital value in as many measurements as you have bits-- in the case of ADC10, it takes 10 measurements.
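To see the guessing game in code, here is a small software model of the successive approximation loop. The compare_to_reference() function is purely hypothetical--it stands in for the hardware comparator and the internal circuitry that generates each trial voltage--but the loop shows how one comparison per bit pins down a 10-bit result in exactly 10 steps.

    // Software model of a 10-bit SAR conversion (illustration only).
    // compare_to_reference(guess) is a hypothetical stand-in for the hardware:
    // it returns 1 if the input voltage is above (guess/1024)*Vref, else 0.
    extern int compare_to_reference(unsigned int guess);

    unsigned int sar_convert(void) {
        unsigned int result = 0;
        unsigned int bit;

        for (bit = 1u << 9; bit != 0; bit >>= 1) {   // test bits 9 down to 0
            result |= bit;                           // tentatively set this bit
            if (!compare_to_reference(result))       // input below this guess?
                result &= ~bit;                      // then this bit must be 0
        }
        return result;                               // 10-bit approximation
    }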

Note that instead of adding more fixed comparators, we use only one as opposed to the 1,023 we would have needed to get the same resolution. The compromise is that now we have to change the comparison voltage and make multiple measurements. These two factors lead to a limit on how quickly we can make a measurement; to get very fast measurements, we need fast settling times on both the reference divider and the recording to the data register (which depends on the number and frequency of clock cycles in the CPU, of course). The ADC10 is rated for measurements up to about 200,000 samples per second (or 200 ksps, in the nomenclature used in the datasheets).

The actual mechanism used to make the measurement is pretty simple; we use a DAC of some sort to set the reference according to the bits we've already determined. The details are fascinating, but beyond the scope of this particular tutorial. Feel free to read the Family User's Guide or a copy of MSP430 Microcontroller Basics to get more information. Search online as well. That concludes our introduction to ADC. This is only a basic introduction, of course, and the ADC10 has a wealth of powerful operating modes. Next time we'll look at how to configure the ADC10 peripheral.

13 September 2011

Tutorial 14b: Adding a New Library

Before I start this tutorial, let me add a caveat: I have a feeling this is not the best way to build a library in CCS. It is, however, the only way I could get it to work reliably short of copying the code into every project I use it in. If anyone has some experience with this in CCS, please send a comment and let me know!

We have code that will let us easily send text to the LCM, which would be very useful to have in a library that can be called up as needed, without having to rewrite (or copy-paste) the code every time. The C language makes doing this fairly easy, and so we'll look now at moving the LCM code into a library and go through how to configure a project in CCS to use the library. You should be able to add any code you'd like to reuse to this library and be able to call it up whenever needed.

First: choose a location to keep your library. It's not important where this library resides (from the compiler's point of view), but it's best to have it somewhere easy to get to when you add/change code in your library. At the same time, it should be somewhere safe, where it won't be accidentally deleted, moved, or changed in any unintentional way. I chose to create a folder in my workspace directory called 'library'.

Second: copy any #include directives, #define statements, function prototypes, and global variables into a new header file. For this library, I've called it simple_LCM.h. If you're going to use definitions specific to the MSP430, you will need to include the MSP430 header as well. To keep your library general, rather than including the header file for a specific device, just #include <msp430.h>.

Third: copy the remaining code (the encapsulated functions) into a new .c file with the same name (i.e. simple_LCM.c in this case). At the top of the file, you should add #include <filename.h> (replacing filename with the name of your library file). Note that this file should not have a main function in it.
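As a rough illustration of what the header from the second step might look like, here is a skeleton for simple_LCM.h. The include guard prevents trouble if the header ever gets included twice; the prototypes shown (and their parameter types) are just placeholders--use whatever your LCM code from the previous tutorial actually declares.

    // simple_LCM.h -- illustrative skeleton only
    #ifndef SIMPLE_LCM_H
    #define SIMPLE_LCM_H

    #include <msp430.h>

    // Any #define values and pin assignments for your LCM wiring go here.

    // Prototypes for the functions defined in simple_LCM.c:
    void PrintStr(char *text);
    void MoveCursor(char row, char col);

    #endif /* SIMPLE_LCM_H */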

Fourth: in your new project, right click the project folder and select new → folder. Click the [Advanced >>] button, and select "Link to folder in the filesystem". You can then browse to your library folder and finish adding the folder.

Any files in your library directory are now available for use in your code; the compiler, however, needs to be aware of the path to this folder to find it. (This is the part I don't like; this has to be done for every project, and I'm unable to find a way to make this path be a default in CCS for every new project.)

Fifth: right click your project folder and select properties. Open the C/C++ Build window, and in the Tool Settings, look for MSP430 Compiler → Include Options as well as MSP430 Linker → File Search Path. Both of these need to have your library folder added to the list in order to compile your code.

One shortcut I've found: In CCS, go to the menu Window → Preferences, then navigate to General → Workspace → Linked Resources. Here you can define a path variable (e.g. My_Library) that links to your library directory. When you add a new folder to a project, instead of browsing to the folder location, you can click [Variables...] and select it from the list; it's much quicker that way. Unfortunately, I can't seem to get the project properties changes to recognize the path variable, though it seems that should work.

Now we should be ready to build our capacitance meter using the LCM. The code I've written in CMeterLCMG2211.c demonstrates a number of new ideas using the LCM. Browse the code and examine the comments to see how it works. Note the use of MoveCursor(row,col); and the particular commands sent to configure the LCM.

While the simple_LCM library has a routine for printing strings, what happens when we want to print an integer value like the recorded value in the time variable? One intuitive option (at least if you're accustomed to programming in C) would be to use the stdio library and the function sprintf();. All we would need to do is set up a character array such as print_time[10], and use sprintf(print_time, "%d", time); to put the integer into the print_time string and pass it to PrintStr(). Unfortunately, this method has some serious problems for microcontroller use. First of all, even with the heavy streamlining done in CCS to reduce its size, any code using a printf function will be large. In this program, it would exceed the 2 kB of size available in this device. Second, the streamlining makes it difficult to format correctly; ideally, we'd use a %10d format specifier to put time into exactly 10 places to fit the print_time size. We can't do this with the streamlining implemented. We can change the printf assumptions in the project properties, but that makes the function use even more of our severely limited code space.

Fortunately, there are some ways around this problem. For an integer, we can pick off the individual digits by using the mod operator and integer division. x % 10 gives the last digit of the number stored in x, and x /= 10; removes that last digit, leaving everything up to the second-to-last. By running a loop over the number until we reach the condition x == 0 (no more digits), we can pick off each digit to print one by one. The ASCII codes (and the codes for the LCM) are arranged such that the lower nibble corresponds exactly to the digit's value, so 0x30 + 0 is "0", 0x30 + 7 is "7", and so on.

The disadvantage of this loop technique is that the digits are picked off in reverse order--from right to left. The LCM has a mode that allows you to decrement the cursor position when you send characters, however, so it's possible to print from right to left in this way. (In fact, this ability is used in many hand-held calculators.) See the code for the exact commands needed to configure the LCM for this mode.
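If you'd rather avoid the decrement-cursor mode, another option is to pick the digits off into a small buffer from the right and hand the finished string to PrintStr. This isn't the approach used in CMeterLCMG2211.c; it's only a sketch of the mod/divide loop described above, and the function name is made up for the example.

    // Convert an unsigned int to a decimal string and display it with PrintStr.
    // The buffer is filled from the right, so the digits end up in normal order.
    void PrintUInt(unsigned int x) {
        char buf[6];                 // up to 5 digits for a 16-bit value, plus '\0'
        char *p = &buf[5];

        *p = '\0';
        do {
            *--p = 0x30 + (x % 10);  // 0x30 is the ASCII code for '0'
            x /= 10;
        } while (x != 0);

        PrintStr(p);
    }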

And there's our first complete scientific instrument using the MSP430. We use a combination of the timer and comparator with a calibrated clock to measure the decay time in an RC circuit. The LCM displays the measured time in microseconds. Knowing the value of R and the reported time, we can calculate the actual value of C measured by the meter.

Reader Exercise: This works fine, but wouldn't it be nice to have the LCM display the capacitance rather than the time? You can do floating point operations in the MSP430 (albeit inefficiently), but how would you display a floating point number on the LCM? If sprintf was too big for the program above, it will definitely be too large in this case. Can you come up with a way to display the capacitance without exceeding the 2 kB limit for the G2211 device? If you get stuck, one way is demonstrated in CMeterLCMFull.c. It also has the benefit of being auto-ranging. This code takes up 1934 bytes of space-- just barely small enough to squeeze into the G2211!