AF Vehicle blog — September 7, 2009
I have now got a rudimentary protocol set up and have run a few tests to make sure that the microcontrollers can actually send and receive data over the bus. I decided to use the term "packet" for the discrete collection of data that each module will put on the bus. I realize again it sounds a bit more ambitious than it is, but I couldn't think of another term. I was calling it a "frame" but there are two reasons why that is not appropriate. First, a frame usually refers to a fixed-length time-slice, and my packets are variable in length. Second, it would cause confusion with the 20 mS R/C "frame."
Inter-module communication tests
This test was an attempt to get two microcontrollers to talk to each other (or rather, one to talk to the other) over the bus. One was set up as both the master and the transmitting module (i.e. it transmitted the break, the module select byte and the packet data). The second acted as a receiver only, detecting the break and decoding the module select byte and data stream. The received data was stored in EEPROM for later analysis.
Results
It was necessary to rework the receiver slightly. The time taken to write a byte to EEPROM (several milliseconds) meant that it couldn't be done in real time, i.e. as the data was received. Instead, I stored the data in a buffer in RAM and wrote it out to EEPROM at the end of the test. The test was considered complete when a long (500 µS) idle was detected.
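To make the structure concrete, here is a rough C sketch of the reworked receive-side test. The names waitForBreak(), waitForStart(), recvByte() and eepromWrite(), and the 16-byte buffer size, are placeholders of my own for illustration; only the idea of buffering in RAM and dumping to EEPROM after the 500 µS idle comes from the test itself.

    /* Sketch of the reworked receive-side test.  waitForBreak(),
       waitForStart() and recvByte() stand in for the bit-level bus code,
       which isn't final yet; the point is the buffer-in-RAM-then-dump-
       to-EEPROM structure. */

    #define BUF_SIZE 16              /* assumed test packet size */

    unsigned char buf[BUF_SIZE];
    unsigned char count;

    void runReceiveTest(void)
    {
        unsigned char i;

        count = 0;
        waitForBreak();                   /* sync on the break             */
        while (count < BUF_SIZE) {
            if (!waitForStart(500))       /* no byte within 500 uS: done   */
                break;
            buf[count++] = recvByte();    /* RAM is fast enough to keep up */
        }

        /* Only now, with the bus quiet, pay the several-millisecond cost
           of writing each byte to EEPROM for later analysis. */
        for (i = 0; i < count; i++)
            eepromWrite(i, buf[i]);
    }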
There were several timing problems, which were corrected by adjusting the timing of some of the protocol elements. In particular, a 100 µS idle was introduced after each byte to allow the receiver to store or otherwise process the received byte. There were some other adjustments, which I'll document with the protocol once it is more stable.
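For reference, the transmit side now looks roughly like the sketch below. Again, sendBreak(), sendByte() and delay_us() are placeholder names hiding the bit-level details that are still being tuned; the packet order and the 100 µS inter-byte idle are the only parts taken from the tests above.

    /* Corresponding sketch of the transmit side after the adjustments.
       sendBreak() and sendByte() hide the bit-level details (break length,
       terminator bit, etc.), which are still being worked out. */

    void sendPacket(unsigned char moduleSelect,
                    const unsigned char *data, unsigned char len)
    {
        unsigned char i;

        sendBreak();                 /* mark the start of the packet       */
        sendByte(moduleSelect);      /* which module the packet is for     */
        delay_us(100);               /* 100 uS idle after each byte gives  */
                                     /* the receiver time to process it    */
        for (i = 0; i < len; i++) {
            sendByte(data[i]);
            delay_us(100);
        }
    }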
The first packet was decoded correctly. The second was garbled. The exact nature of the garbling was not immediately clear, as the bit sequences did not appear to match the transmitted data at any simple offset. Probably there was not enough time between the break and the module select byte, throwing off the timing of that byte and all subsequent bytes. The "terminator" bit and idle bits were probably being interpreted as data bits and vice versa, masking the relationship between the intended data bits and the actual decoded bits.
Further analysis is probably not a good use of time, as in some ways this test does not fairly characterize what would occur in the end product. The fact that the first packet was received correctly indicates that the scheme should work in principle. I therefore plan to design a test that more closely matches the real-world situation as it will occur in the end product.
XPS Receiver
I also looked at the output of the XPS receiver on the oscilloscope. At first glance my heart sank. All the pulses occur at once! So I was resigning myself to having one controller (or at least one timing circuit) per channel.
But then I started to look more closely. The channels don't in fact all start at once; their start times are offset by a small amount. In fact, the offset is large enough to be resolved by a sufficiently short interrupt routine. However, what I was really interested in was when the pulses end. Since each pulse can vary in width by a large factor (larger than the start offset), even with the staggered start times it would be possible for two pulses to end at the same time, or near enough that even a very short interrupt routine would not be able to resolve the difference.
So I hooked up two channels through a couple of resistors and adjusted the sticks until the two channels ended at the same time. Yes, it turns out they can end at the same time. But adjusting them a small amount yields a surprise. The start times change so that the pulses still end at the same time, within a small range. Then, after moving a bit further, the end times suddenly jump so that they end some discrete interval apart (approximately 60 µS). After moving still further (with, again, the start times moving slowly relative to each other and the end times a fixed interval apart), it suddenly jumps again so that the end times are twice that interval apart (approx. 120 µS). So it appears that the end times are quantized in approximately 60 µS steps. Any two pulses either end at exactly the same time, or they end a discrete multiple of 60 µS apart.
Very clever! And also good news for me. It means that, as long as I can make the interrupt routine less than 60 µS long, I should be able to resolve each pulse width with just one controller. The (slightly) bad news is that I don't think I can just combine all the channels with diodes and feed them in on one controller pin as I could have done with a 72 MHz receiver. I will need one pin per channel. However, I believe I will only need four channels, and even my little 8-pin controllers have six I/O pins, so I should be fine. (I will of course need one I/O pin for the bus!)
As an aside, the algorithm the receiver uses appears to work like this. For each channel, choose a tentative start time 60 µS after the previous channel's (for the first channel, start at time "zero"). Calculate the end time from the channel's value. Then move the end time ahead until it falls exactly on a 60 µS boundary, adjusting the start time by the same amount.
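In rough C, the guessed-at scheme would look something like this (the 60 µS step is just my measurement, and the rounding direction and names are my own guesses):

    /* Hypothesized XPS channel scheduling -- a guess from the scope
       observations, not anything documented.  All times in microseconds. */

    #define STEP 60u                  /* observed quantization step */

    void scheduleChannels(const unsigned int width[],  /* commanded pulse widths */
                          unsigned int start[],
                          unsigned int end[],
                          unsigned char n)
    {
        unsigned char i;
        unsigned int tentativeStart = 0;

        for (i = 0; i < n; i++) {
            unsigned int e = tentativeStart + width[i];

            /* push the end time out to the next 60 uS boundary, moving
               the start time by the same amount so the width is kept */
            unsigned int rounded = ((e + STEP - 1u) / STEP) * STEP;
            start[i] = tentativeStart + (rounded - e);
            end[i]   = rounded;

            tentativeStart += STEP;   /* next channel starts 60 uS later */
        }
    }

For example, with two channels at 1500 µS and 1520 µS, this gives end times of 1500 µS and 1620 µS: exactly 120 µS (two steps) apart, which matches what the scope shows.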
Of course this may not be exactly how it works. The scope is triggered by one of the channels and thus may be delayed or advanced from frame to frame in ways I am not aware of, which may just make it appear that the end times are quantized when in fact something else is going on. Anyway, it's a really clever way of getting around the resolution problem, which would of course be the same kind of problem when producing the pulses with a single controller as when reading them.
At any rate, it appears that I can time the pulses with one controller, if I use one pin per channel. I will have to keep the interrupt routine very short to make sure it is over within the 60 µS. I can set it so that an interrupt is generated when any of the channel pins change state. I will have a two-byte timer running at the clock speed (1 MHz). The interrupt routine would just read the state of all the input pins (channels) and read the timer and store these two pieces of data in memory for later analysis. Figuring out which channel changed and calculating the channel times would then be done later, once all the pulses were done. Resolution will be 1 µS (much better than your standard analog servo which has an approx. 8 µS deadband).
Anyway, it will be interesting to try to write the code to do this. The question marks I see are 1) whether I can keep the interrupt routine short enough, and 2) whether there will be enough memory. For four channels, there will be eight (maximum) events that the interrupt routine will have to store (the start and end of each pulse). Each event will have associated with it the state of all the channels (one byte) and the timer state (two bytes), for 8 x 3 = 24 bytes. There are only 32 bytes of RAM available, so I hope the rest of it can be done with only 8 more bytes!
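Here is the kind of interrupt routine I have in mind, as an untested sketch. CHANNEL_PINS, TIMER_H and TIMER_L are generic placeholders rather than a real part's register names.

    /* Sketch of the pin-change interrupt handler.  CHANNEL_PINS is the
       port with the four channel inputs; TIMER_H/TIMER_L are the
       free-running 16-bit 1 MHz timer.  events[] holds up to 8 snapshots
       of 3 bytes each (24 of the 32 bytes of RAM). */

    unsigned char events[8][3];
    unsigned char eventCount = 0;

    void pinChangeISR(void)
    {
        if (eventCount < 8) {
            events[eventCount][0] = CHANNEL_PINS;  /* state of all channel pins */
            events[eventCount][1] = TIMER_H;       /* timer, high byte          */
            events[eventCount][2] = TIMER_L;       /* timer, low byte           */
            eventCount++;
        }
        /* Which channel changed, and hence each pulse width, is worked out
           in the main loop after all the pulses have finished.  (On real
           hardware the 16-bit timer read would need a little more care to
           be atomic.) */
    }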