ARMing my projects
June 4, 2014
June 18, 2014 Don't interrupt me in the library
Interrupts are great. They allow your processor to go about its business, not paying any attention to some particular device until that device needs service. The device "interrupts" the processor, which stops what it is doing and gives the device some attention before going back to whatever it was doing.
The ARM Cortex processors can have up to 32 user interrupts, plus about a half dozen system interrupts. Stored away in the low area of flash memory is an interrupt "vector table." There is a 4 byte entry for every possible interrupt. When an interrupt comes in, the processor stops what it's doing, gets the 4 bytes from the appropriate table entry, and "jumps" to that address. That address should be the address of your interrupt service routine, or ISR, which is code written to service the device in an appropriate manner.
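To make the idea concrete, here is a minimal host-side model of that dispatch mechanism. The handler names and table are purely illustrative (the real table lives in flash and starts with the initial stack pointer), but the core idea is the same: the table is an array of function addresses, and an interrupt on line n makes the processor jump through entry n.

```c
typedef void (*isr_t)(void);

/* Illustrative handlers that just record they were called. */
int uart_serviced, timer_serviced;
void uart_isr(void)  { uart_serviced++; }
void timer_isr(void) { timer_serviced++; }

/* Model of the vector table: one function address per interrupt line. */
const isr_t vector_table[] = { uart_isr, timer_isr };

/* What an incoming interrupt on line irq effectively does. */
void take_interrupt(int irq) { vector_table[irq](); }
```

On the real chip the hardware does the `take_interrupt` step for you; your only job is getting the right address into the right slot.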
But yesterday I ran into a bit of a dilemma. The LPC1114 has four very powerful timers. Each timer has its own interrupt. Two are 16 bits and two are 32 bits, but otherwise they are identical in operation. However, some don't have all their pins available on the chip, and most of the pins that are available have other purposes as well. That is normal. Almost all processors have similar constraints. But the point is that in any given application you may need to use one timer over another for some good reason. I have written a library function that uses a single timer, either 16 or 32 bit, to create the trigger pulse and read the echo from an ultrasonic module. Most of the work is done by a very short ISR. You simply set it up and start it, then read the most current value whenever you need it. The timer works in the background, reading the module constantly. Because all the timers work exactly the same, I can use the exact same code for any timer. I just give the address of the timer to the function that initializes the module and starts it working. Neat.
But here is the problem. I want to put this function into my library. Then, whenever I or someone else wants to use it, you just put a couple of lines into your code like this:
init_ultrasonic(CT16B1, TIMER_CHANNEL_0); // initialize the ultrasonic on 16 bit timer 1, channel 0
get_ultrasonic(); // read the latest value whenever you need it
But for this to work, the address of the ISR has to be placed into the vector table in flash memory when the program is compiled and linked. Since each timer has its own interrupt, the address of the ISR should be put ONLY into the table location for the timer you want to use it with. It should NOT be placed into the locations for the other timers, since they will likely be needed for other things. If the vector table were held in RAM it would be simple: just have the init function put the address in the table when it runs. But we can't (easily) do that with flash. The Cortex processors have the ability to move the vector table into RAM, but I would prefer not to do that. There are several reasons, but the main one is space. The table is 32 to 40 words (4 bytes per word), so it will take up 128 to 160 bytes of RAM. That is significant on a processor that has only 4K, and when I move this code to the LPC810 with only 1K of RAM, it is simply unacceptable. There are other hacks that can work around the problem, but all I have come up with so far cause some sort of problem, often making other parts of the code harder to use. For now I am not sure how I will handle it. I suspect I will end up using some ugly hack that uses the devil's executive assistant, #define. We shall see.
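One shape the #define hack could take: the user picks the timer at compile time by defining the ISR name the library will use, so the function lands in exactly one vector slot. All the names here are hypothetical; TIMER16_1_IRQHandler stands in for whatever symbol the startup code's flash vector table already references for that timer.

```c
/* Hypothetical sketch: before including the library source, the user
   defines which timer's vector slot the ultrasonic code owns. */
#define ULTRASONIC_ISR TIMER16_1_IRQHandler

int echo_width;  /* most recent echo time, updated by the ISR */

/* The macro expands so this defines TIMER16_1_IRQHandler, which the
   startup code's vector table is assumed to point at. */
void ULTRASONIC_ISR(void)
{
    /* On the real chip this would read the timer capture register;
       here we just record a dummy value. */
    echo_width = 1234;
}
```

It works, but it's exactly the kind of compile-time coupling that makes a library awkward, which is why I'm not thrilled about it.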
June 16, 2014 More parts
After a long, busy, stressful day, I finally got around to ordering some more parts from Mouser. I ordered enough LPC chips and MCP1700 3V3 LDO regulators to build all the boards. Got a couple other interesting parts, too. Got some of these: https://secure.cypress.com/?rID=92146&source=shop. These boards are really interesting. The PSoC chips have programmable logic and programmable analog. Kind of like a mini-FPGA on chip, so you can define the peripherals you want/need instead of being stuck with what the manufacturer put in. The board is an incredible bargain at $4 each. It even comes with a removable USB to serial converter.
One really neat thing about those chips is that they are ARM Cortex M0, just like the LPC1114! That means the C compiler will compile code for that chip with only some very minor changes to the linker script and startup assembly files. The peripherals, of course, will be different and require a LOT of work. I am considering using Verilog or VHDL to configure the digital logic. That should be a neat project, but a long one.
I did some coding this morning. I have a couple of libraries working now. More code to write, but it is coming along nicely.
June 13, 2014 GCC, ld, ar, make, and more!
In the time since I ordered my PCBs I have been working once more on software. I have enough code written to make a useful library, so it is time to package it all up properly. So far, I have just been putting all the .c source and .h header files in one development directory along with the .S assembly language startup files and .ld linker scripts. That's easy (if not optimal) for development: just tell GCC to compile and link everything in sight in one go and be done. But if I or anyone else wants to actually use this stuff for developing real programs, it needs to be packaged up properly into a library, placed in an appropriate directory structure, and separated out from the code it is being used with. Alas, here comes the pain.
GCC (the Gnu Compiler Collection) and friends (ld, the Gnu linker, and others) implement what is known as "weak symbols." What that means is that you can define a function or variable with some particular name and declare it as "weak." Defining another object with that same name would normally cause a compile or link error, but with weak symbols the "strong" symbol (strong is the default) will override and replace the weak one. This is very handy in a lot of places. A great example is interrupt service routines. The startup code needs to have interrupt service routines defined for all the interrupts, but in any particular program you may not need them all, and who knows exactly what you will need. So you can have "default" interrupt service routines (ISRs) that do little or nothing as placeholders, defined as weak symbols. Later, when you add your actual ISRs, they will replace the stub routines in the startup code. This is a great and very useful feature, and it is actually used quite a bit. So you write these startup routines, or I/O routines, or whatever, which then get compiled into .o object files and placed into libraries for ease of use later when you need them. Inside them are lots of weak symbols that you can override by writing code to replace them.
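Here is roughly what that startup-file idiom looks like, as a small self-contained sketch (handler names are illustrative, and the weak/alias attributes are GCC-specific): every interrupt handler is declared as a weak alias of one do-nothing default, and a strong definition anywhere else in the program silently replaces the stub at link time.

```c
/* Count how often the placeholder runs, so we can see it being used. */
int default_hits;

/* The do-nothing placeholder. On a real chip this might just loop. */
void Default_Handler(void) { default_hits++; }

/* Every IRQ handler starts life as a weak alias of the default.
   Define a strong UART_IRQHandler elsewhere and it takes over. */
void UART_IRQHandler(void)      __attribute__((weak, alias("Default_Handler")));
void TIMER32_0_IRQHandler(void) __attribute__((weak, alias("Default_Handler")));
```

With no strong overrides present, calling either handler lands in Default_Handler; link in a real ISR with one of those names and the alias disappears without any source changes to the startup file.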
So you write your change-the-world program for your shiny new LPC1114 using the arm-gcc compiler toolchain. Now the trouble starts. The people who provided the toolchain, with the standard C libraries, have no way of knowing what kind of hardware you will attach to that nifty LPC chip. So they provide the libraries (newlib, the standard C library) with lots of weak symbols for the low-level I/O stuff you will need to perform. Then they tell you to write your functions to perform that I/O and link with their weak symbol library. Except they don't tell you that, unless you read very carefully between the lines. So you figure that part out, dive into the (scant) documentation to find exactly what you have to have, and write that code. You throw it all into one big directory to develop it, modify one of their provided but very poorly documented example makefiles, and type "make." Groovy. It builds, and after a few iterations of trying different things, it actually runs! Wow, success! In case you missed it buried away in the last few sentences, I casually mentioned "makefiles." What's that?
make is a program that, while relatively simple in concept, over its nearly 40 years of existence has grown into a monster. It was designed to make the process of compiling and building programs simpler. It is very powerful. Often when compiling programs things have to be done in a particular order. Also, if some parts are already compiled it is a waste of time to compile them again. make takes an input file, normally named either "makefile" or "Makefile," which has a list of "targets" with dependencies and a "recipe" for how to build each target. The target and dependencies are often file names. You tell make to build a specific target, or the default (the first one listed) if none is given. make looks at the file name of the target. If that file doesn't exist, or if it exists but is older than any of the dependency files, it knows it needs to use the given recipe to create or update the target. A very simple makefile might look like this:
myprogram: myprogram.c myprogram.h
gcc -o myprogram myprogram.c
Not too bad. The first line has the target first, myprogram, followed by a colon then the list of files the target depends on. So our example means our program, myprogram, depends on myprogram.c and myprogram.h. Now, anytime you edit one of those files and run make, make knows your program is out of date by comparing the last modified time of the files. If your program is not out of date, make either says nothing or says something like "nothing to do." This is all well and good, but you wouldn't really need a makefile for something that simple. How about something a bit more realistic to see why we really need make?
Say you are building robots. You keep writing code to interface your ebay special HC-SR04 sensors with every one you build. Suddenly it dawns on you that you could write that code once, stash it away in its own file, and use that file as part of your program with the next robot you build. Great idea! Compiling that way will be a little more involved, so we will let make handle it for us.
killerbotprogram: killerbot.c killerbot.h ultrasonic.o ultrasonic.h
gcc -o killerbotprogram killerbot.c ultrasonic.o
ultrasonic.o: ultrasonic.c ultrasonic.h
gcc -c ultrasonic.c
So the first line is a target line. The target is killerbotprogram and it has four dependencies: killerbot.c, killerbot.h, ultrasonic.o, and ultrasonic.h. The third one is a little different from the others. Notice the .o on the end. That signifies an object file. An object file is the file a compiler puts out when it has compiled a single file but not a complete program. A linker takes all the needed object files and "links" them together to make a complete program. In our case, ultrasonic.o is the object file that comes from compiling the ultrasonic.c file by itself. Notice the third line, which starts with that same file name. The third line is also a target line: it has our ultrasonic.o file, which was listed above as a dependency, now listed as a target. When make sees that, it knows it needs to make sure ultrasonic.o is up to date before using it in our program. The second and fourth lines are the recipes for creating the targets. A note about makefiles: the recipe lines MUST begin with a tab character. Spaces will not work.
It turns out that gcc is not just a compiler. It is also a "driver" program that will run other programs it needs to build a complete program. The main one it drives is ld, the Gnu linker. The linker takes any number of object files that have been compiled and links them together into a program. When you ask gcc to build a program it will compile any source (.c) files you give it, then it takes the object (.o) files created from that along with any object files you passed to it, and hands them all to the linker, ld. The linker combines them all into a program and also looks at "libraries" to find anything else it needs.
Libraries and ar
In programming, a library is usually a group of object files, often related somehow, all grouped together and stored in (typically) a single file. A lot like a .zip file, except they usually aren't compressed. Another name for them is archive, and the traditional Unix (and Linux) program for creating and extracting archives is called "ar." As a side note, the common .tar files found on Linux (and Unix) systems are a type of archive file that was originally intended to be stored on magnetic tape. tar means "tape archive." A common way to create a library is to create a makefile that has the archive name as the first (default) target, and list all the object files you want included as dependencies. The recipe for that target will use ar to combine all the object files into an archive (library). Each object file listed as a dependency to be included is then listed as a target with its source code file(s) listed as dependencies and a recipe that's used to create it. make looks first at the archive, then all the dependencies. It updates any dependencies that need updating, then updates the archive if it needs to. And voila, you now have a library!
Now that you have your shiny new library (archive) with all your cool reusable robot code, how do you use it? make and ld have special features designed for using libraries. You can list a target or dependency in a makefile as being inside a library by using the special syntax archivename(filename). ld can look through a list of archive files for functions and variables it needs to build a complete program and extract the file(s) it needs from the archive. It has a built-in list of libraries it will search, and you can tell it other libraries to look at specifically, or give it a directory and it will search all the archives in that directory.
Remember our weak symbols? This is a song, uh story, about weak symbols (apologies to Arlo Guthrie and Alice). When last we saw weak symbols we were singing their praises. Wait a minute. Not so fast. As Paul Harvey would say, here's the rest of the story.
continued June 14
Those weak symbols are pretty nifty. How do you create one? gcc has a ton of features above and beyond the relevant language standards, especially for C. One particular feature that gets used a lot, especially in embedded work, is attributes. Attributes are a way to tell the compiler (and other tools) some special information about a symbol (a function or variable name). An attribute is normally attached to a function declaration like this:
void myfunction( int param1, int param2) __attribute__((attribute_name));
One of the hundreds of attributes available is "weak," which of course causes the compiler to create the function name as a weak symbol. There are a lot of other attributes for various purposes. We will see some more shortly. Some attributes are handled entirely by the compiler. Others, like "weak," pass information along to other tools. In the object file the compiler creates, it inserts a special flag with the symbol to indicate it is a weak symbol. The linker, ld, sees that flag and knows what to do with it. So, let's say for instance that you are creating a standard C library to be used on ARM chips that may have any variation of hardware the hardware designer can dream up. The standard C library expects to be able to do input/output, so it has to have functions available. But the people creating the library don't know how to write those functions, since they don't know what will be attached. They use weak symbols with "stub" functions that do nothing, which then get overridden when the user creates real I/O functions that are strong symbols. Simple and effective. Then the user comes along and writes his/her nifty new robot code, tells the compiler to compile it and the linker to link it with the standard library. The compiler dutifully compiles it and creates another object file, which is passed to the linker. The linker looks in the object file for symbols that are referenced but not defined (which is what happens when you "call" a library function) and checks the library for functions that match and links those in with the user's program. Normally, if it finds two defined symbols with the same name it is an error, but not if one is weak. It simply uses the strong one. Then, again being very helpful, the linker takes all the code that is present, but not used, and throws it out so it doesn't take up space in your program for no reason. It does that BEFORE it goes looking for needed functions in the library. Very, very helpful.
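One more useful property of weak symbols, as a tiny sketch: a weak symbol that nothing ever defines still links cleanly and resolves to address 0, so code can probe at run time whether an override was provided at all. The function name here is hypothetical; this is just the GCC weak-declaration pattern.

```c
/* Declared weak and never defined: the linker resolves it to 0
   instead of reporting an undefined-symbol error. */
void app_io_override(void) __attribute__((weak));

/* The library can test the address to see whether anyone supplied
   a real implementation. */
const char *which_io(void)
{
    return app_io_override ? "user override" : "library default";
}
```

A startup file or library can use this to call an optional hook only when the user bothered to write one.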
But here's the catch. Notice the sequence of events. The compiler compiles your code and takes that object file(s) and any other object files you specify and passes those to the linker. The linker combines (links) those and throws out what isn't used, then goes looking in the libraries for anything else it needs and links those in. It searches the libraries in a specific order: first the ones you specifically tell it to search, then the ones on the built-in list. It normally only looks at a library once. This usually works just fine. Any weak symbols you want to override are in the libraries, and your strong symbols will be used in your code so they will override the weak ones that the linker finds later. But what if you want to create a library of your I/O routines for your hardware to be used with the standard libraries?
If you write your low-level driver functions and your high-level program and tell gcc to compile them together, it passes all those object files together to the linker. Since the linker gets all the object files at once, it can find any reference from your code to the libraries and, more importantly for this discussion, from the libraries to your code. Those weak symbols defined in the standard C library that need to be overridden by your code will be found. But what happens when you compile a library of those I/O functions that the standard library needs to use? Well, your main code gets compiled and passed to the linker. The linker throws out what isn't used. It then looks in the libraries you tell it to and links in what it sees it needs from those. That is where your low-level I/O routines are. But it hasn't yet seen the standard libraries that need to call your I/O routines. Since it hasn't seen any reference to them, it doesn't link them in. Then it moves on to the standard libraries and links in what it needs from there. That is when it will find references to your low-level I/O. But at that point, it has already scanned your library and thrown out your I/O functions!
And that is where I stand right now. I have some basic low-level I/O routines that use the UART for text I/O. Those routines allow you to write printf("hello world\n"); and have "hello world" show up on your PC connected by a serial connection. But if I put those into a library and tell make and ld to use that library, the standard libraries won't ever find them! I'm sure that buried somewhere in the Gnu documents there is a way to make that happen. I found some promising attributes, especially "used", that I thought would work, but so far they have not. I can't imagine there isn't a solution, but as of now I have to tell make to pass that particular object file by itself rather than as part of a library.
But now I'm not sure I want to find a solution. Perhaps that would be neater and easier in some cases, but what if I want to use some other low-level routines instead? Perhaps I want printf() and friends to write onto an LCD instead of the serial port? So I am not too concerned with finding a resolution, but I will keep looking anyway. In the meantime, I simply put into the makefile that those files should be passed separately rather than as a library.
And the battles go on.
June 10, 2014 Schematic
I thought I should include a schematic of the board I designed.
June 9, 2014 Devil in the Details
With the PC boards ordered I have some time to concentrate on other parts of the build. I ordered some headers, capacitors, connectors, and other miscellaneous parts from Tayda electronics a couple days ago. This morning when I got up I checked my email and had ship notifications from Itead and Tayda in my inbox. Groovy! The boards and the other parts have shipped, within an hour of each other!
Also, in the time since I ordered the boards I have been coding. Up to now I had not used the ADC (analog to digital converter) and had not written any code for it. The ADC on this chip is very powerful. It is a 10 bit converter with 8 channels (not all available on pins on all chip variants). It can operate with a clock rate up to 4.5 MHz and has a built-in divider to derive that clock from the system clock. It can do continuous conversions automatically, or individual conversions triggered several ways, including by software, by a timer, or by an external pin change. The eight channels each have their own data register, holding the latest conversion for that channel. There is also a global data register that holds the most recent conversion from the last channel converted, whatever it may be. Each register also stores status data about the channel. The global register includes a field with the channel number the conversion came from. The control register for the ADC holds a list of channels you want to convert. You select which channels to use in the list, and each conversion steps through the list in order, skipping over non-selected channels. That way the chip doesn't waste time on channels you don't need or care about. It takes 11 clocks to do a full 10 bit conversion, but you can use fewer for less resolution, all the way down to 4 clocks for a 3 bit conversion. At 11 clocks per conversion and a 4.5 MHz clock it can perform over 400,000 conversions per second. It also has nine selectable interrupts: one for each channel plus one for the global register. These interrupts are chosen just like the channels, so you can choose some channels to interrupt and some to convert but not interrupt. You can also have the global register interrupt the processor on any conversion complete. All in all, a very powerful analog subsystem.
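The channel-stepping behavior is easy to model on the host. This little sketch (my own model, not NXP code) treats the channel-select field as a bitmask of 8 channels and finds the next selected channel after the current one, wrapping around, the way the hardware sequencer skips unselected channels:

```c
/* Model of the ADC sequencer: sel_mask has one bit per channel (0-7).
   Returns the next selected channel after 'current', wrapping around,
   or -1 if no channel is selected at all. */
int next_channel(unsigned sel_mask, int current)
{
    for (int i = 1; i <= 8; i++) {
        int ch = (current + i) % 8;
        if (sel_mask & (1u << ch))
            return ch;
    }
    return -1;
}
```

With channels 0, 2, and 5 selected (mask 0x25), the sequence runs 0, 2, 5, 0, 2, 5, ... and channels 1, 3, 4, 6, and 7 never cost a conversion slot.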
But with power comes complexity. I spent quite a bit of time deciding how I wanted the library code interface for the ADC to be structured. Once I had the plan I started coding. I made a few changes to the plan as I went, as is typical. Then, finally, yesterday morning I had enough code to try it out. I kept getting all zeros. And no status or channel information. hmmm. I was leaving for a family day with the wife and kids so I couldn't investigate then.
I have to add here some notes on the NXP documentation for the LPC chips. The LPC1114 has a data "sheet" of some 400 pages. Even that doesn't include much information on programming the chip. For that, they have a "User's Manual" (http://www.nxp.com/documents/user_manual/UM10398.pdf) that is an additional 546 pages! To their credit, the user's manual is very detailed: all the information is in there, including very detailed descriptions of all the I/O registers (a lot!). But the information isn't always where I would like it to be. With a chip this complex, a lot of parts interact. The information for one part is in its own section, but the information for an interacting part is in that part's section. You have to look in both (or more) sections to get all the information. Each peripheral section would do well to have a summary listing all the parts you need to set up to use that peripheral. Oh, well. I shouldn't complain. They do provide all the information, which is a lot more than some companies do!
Anyway, after about a half hour trying to figure out what the problem was, I came across the PDRUNCFG register in the SYSCON (system configuration) block of registers. That block alone has 37 different registers. The PDRUNCFG register controls power to various parts of the chip. Aha! So I set the appropriate bit in the PDRUNCFG register and tried it. All zeros. hmmmm. I enabled the ADC. I set the bit in PDRUNCFG. I seem to be doing everything else properly. What's the problem? RTFM! Oops. The bits in the PDRUNCFG register turn OFF the power! Setting a bit turns the power off. But why didn't it run before I set the bit? Ah, the chip comes up with some bits set for power saving, including the ADC bit. Change the code to CLEAR the bit. Try again. YES! Now it is reading garbage values from whatever channels I choose.
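The trap fits in a couple of lines of C. This sketch models it on the host; the ADC bit position (4) and the reset value are from my reading of UM10398, so double-check them against the manual before trusting this on real hardware:

```c
#include <stdint.h>

/* PDRUNCFG bits are power-DOWN flags: a 1 means the block is OFF.
   Bit 4 is assumed here to be the ADC's power-down bit. */
#define ADC_PD (1u << 4)

/* Hypothetical reset value: the chip wakes up with several blocks,
   including the ADC, powered down to save energy. */
uint32_t pdruncfg = 0x0000EDF0;

void adc_power_up(void)
{
    pdruncfg &= ~ADC_PD;   /* CLEAR the bit to power the ADC up! */
}
```

Writing `pdruncfg |= ADC_PD;` instead, as intuition suggests, leaves the ADC off and you reading zeros forever.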
I still have some coding to do for the ADC, but I have conquered it. I now have most of the peripherals working, but a few (like I2C) remain. It's a long but rewarding journey. I'm having lots of fun. And soon I hope to have some really useful tools.
June 6, 2014 The Good and Bad of the LPC1114
I thought it would be useful to list some of the features and characteristics of the LPC1114 and various ARM chips. The LPC1114 is a member of the ARM Cortex M0 architecture. The LPC810 is an M0+. These all have certain capabilities in common, which may or may not be implemented by a manufacturer. It is a 32 bit ARM core that uses the Thumb 16 bit instructions to save code space. A few highlights of the 1114 in particular:
Maximum 50 MHz operation. Most instructions able to complete in a single clock cycle.
32K flash for program storage and 4K RAM for data
16K of ROM (permanent program memory) containing a bootloader and other utilities; it cannot be erased or corrupted.
A complex and powerful system of GPIO ports. There are three 12-bit I/O ports, though not all pins of each are available. They can be accessed and updated in numerous ways, with various drive options such as pull-up or pull-down resistors.
5 channels of 10 bit Analog to Digital Converter available with automatic or program controlled sequencing through all selected channels.
32 possible "external" interrupts with a nested, vectored interrupt controller with assignable priorities. There are at least 8 pin change interrupts.
An I2C bus controller that can work up to 1 megabit per second
A SPI bus controller with extra capabilities
A UART (serial interface) with special modes including 9 bit (for certain types of network addressing) and RS485 control. It also has auto-baud rate capabilities, with support in ROM. The UART is compatible with the 8250/82450/82550 used in PCs, so if you have ever done low-level programming of those it will be familiar. (My libraries should shield you from most of that.) In addition, the UART has a 16 byte receive FIFO (First In First Out buffer) that allows receiving up to 16 characters before your code has to read them, without losing any.
Four timers, two 16-bit and two 32-bit, nearly identical, with various PWM and other capabilities including 5 PWM channels available. I have set up a single timer to automatically trigger and read an ultrasonic module, causing an interrupt when complete, that only needs a very short interrupt service routine to read the value. This makes the ultrasonic value available at all times by just reading a variable.
An additional "Systick" timer, 24 bits, designed especially for providing a special "heartbeat" interrupt every 1, 10, 100, or some other number of milliseconds.
An internal 12 MHz RC oscillator that is accurate to within 1%. That is accurate enough to reliably use it for serial communication (2% is generally the minimum accuracy needed). There is also a Phase Locked Loop to raise (or lower) the clock frequency all the way up to the 50 MHz max. Used with the internal oscillator, 48 MHz operation is easily achieved without an external crystal. An external crystal or other external clock source can be used if needed or desired.
3.0 to 3.6 volt operation, with most pins being 5V tolerant. That means it is easy to interface with 5V parts such as low-cost ultrasonic modules without external level shifters. The needed voltage is a good match for a couple of AA or AAA cells or a single lithium coin cell, and can easily be adapted to other battery types or voltage sources.
A powerful watchdog timer
Available in a breadboard and prototype friendly 28 pin DIP package.
Extremely low price. Under $3 in single quantities.
It isn't perfect. Here are some downsides.
Not widely used in the hobby market, so support and tools are hard to come by. (I'm working on this one.)
Most of the pins can only provide about 2 mA of current, compared to the 20 mA or more provided by many chips such as AVRs. The I2C pins can provide 20 mA.
No EEPROM. If you need to store data with the power off, either the internal flash memory or some external memory needs to be used.
Although the RESET pin can easily be programmed as a GPIO without losing access to programming, the crystal pins are fixed function and cannot be used for any other purpose. On many systems that don't need a crystal, those pins are wasted.
The complex, powerful peripherals make the chip difficult to use if you have to program at the lowest levels. My aim is to make that much simpler with the libraries I am creating.
As you can see, this is a nice and powerful chip. It's a lot of fun exploring its capabilities.
June 4, 2014
About a year ago, NXP released the LPC810 microcontroller. It really caught my interest because it is a fast (30 MHz) 32 bit ARM chip in an 8 pin DIP package, and very cheap (about $1.40). There are very few ARM chips in prototype friendly DIP packages, and this one is 8 pins! They were initially rather hard to get: apparently NXP didn't anticipate how popular it would be. I ordered a couple of starter kits from Adafruit and started playing. It's an amazing chip, with lots of powerful peripherals. However, the tools available were, uh, less than stellar. I also wasn't thrilled at having only 8 pins, six of which provide I/O. I soon discovered the big brother: an older LPC1114 in a 28 pin DIP. WOW! This one is even better! The 810 has 4K of flash and 1K of RAM and runs at 30 MHz max. The 1114 has 32K flash and 4K RAM and runs at 50 MHz max. But, again, the tools available are lacking, for various reasons. I ordered some 1114s from Mouser and began working on creating some tools.
Work on the tools is ongoing. I am concentrating first on the 1114, later the 810. I can now compile, link, and download to the chip. I have some pretty nice libraries for accessing the on-chip peripherals. Still a lot of work to do, but I will soon be making a pre-alpha version of what I have so far available. I built three basic boards on Radio Shack proto boards, but decided I needed some real PC boards. I have made a lot of PC boards at home, but that is a painful process for through hole components: I can never get my holes drilled just right, and connecting the sides is a pain. I decided it was a good time to get some made professionally, and the Chinese board houses are more than accommodating, making them cheaper than I can! So I drew up a quick and dirty board to fit Itead Studio's 5x10 cm board. I will get ten of these for $20.49 including shipping. I couldn't make them for that. It is the same basic design as what I put on the protoboards used in the Valdez family of robots. I am not overly happy with the layout, but it should be workable. I created almost all the component footprints and schematic symbols myself, so it will be interesting to see if it turns out right. Because of that, I wanted to rush the order as a test of all my PCB tools. Here is the board:
It has the LPC1114, an SN754410 H bridge, a 5V LDO and 3V3 LDO, a reset switch and programming jumper for activating the bootloader, connectors for two DC motors (or one stepper), TTL level serial, I2C, an HC-SR04 ultrasonic sensor, and nine 3-pin servo-type headers for servos, ADC inputs, or general I/O. There is also a decent sized prototyping area on board to add components.
I should have the boards back by our (US) Fourth of July long weekend, so I can play with them then.
The LPC chips are great. They have now become my main processor of choice. I think the poor availability of tools has hampered their acceptance in the hobby community. I aim to do my part changing that. Others are working on it as well. Ladvien has been working in this area recently (http://letsmakerobots.com/lpc1114-usb-serial-solution-rerolling-boot-uploader). I will be posting more information about the chips themselves, my work with them, and other work by other people. So, if you are interested, stay tuned!