Archive for Projects

uMesh

I’ve been working on an ESP32 module.

Part of the problem I’ve been seeing with inexpensive IoT dev boards is that the design around the power system hasn’t been very good. Here’s my attempt to fix that. This is a battery-ready module with a proper lithium battery charge circuit, lithium battery protection circuit, power supply, and antenna, all in a 1 inch by 1 inch package.

The goal is to have a tiny, inexpensive module that can immediately accept a battery and be deployed in the field, along with 30 of its mates.

The battery/power circuitry is surprisingly complex, which is why the built-to-a-price-point applications often don’t have the “proper” battery control, opting instead for “good enough”.

And when I say tiny, I really do mean tiny.

 

The main interfaces to the world (other than WiFi or Bluetooth) are castellated headers on the left and right sides. Those grant access to input voltage, battery voltage, output voltage, TX/RX pins, bootmode selection, and a few GPIO. Because of them, this module can be soldered directly down to a larger host board if necessary, and can even provide regulated 3.3V output to it if given battery power.

What sets this apart in terms of battery handling are a few things:

  • There is a buck-boost power supply to provide a constant 3.3V to the ESP32 through a battery’s entire range (3.0V-4.2V)
  • There is a cutoff for the battery when it hits 3.0V, to prevent over-discharging it
  • When the module is plugged in (through castellations or through the USB connector), it will switch over to using that as a power source. It can be hot-swapped
  • Also while plugged in, there is circuitry for constant-current/constant-voltage charging of the battery
  • The battery will still charge while the device is switched off

The battery just solders on to some pads on the back. Any size of single-cell battery will do, although the programmable charge rate is set by a resistor value that is soldered at manufacture time.

 

The USB port is for power/charging only, and has unconnected data pins. I also somewhat expect this microUSB port to shear off at some point, as they have kind of a history of doing that.

For the microcontroller, I’m using an ESP32-PICO-D4, driving a metal stamped antenna for 2.4GHz through a pi filter and 50 ohm impedance matched traces.

I haven’t really considered applications just yet, but it certainly fills a common need for IoT projects, given that a battery is usually necessary.

While waiting for shipping, and personal time to build it up by hand (mostly the latter though, Oshpark is awesome), I wrote an assembly and bring-up manual. It’s currently clocking in at 17 pages, but that includes a lot of reference material. I’ve uploaded it as a PDF here. It includes full schematics, part positioning information, a net list, and a BoM.

Here’s the Oshpark link to the project where it can be ordered (or gerbers downloaded too). It is a 4-layer board, and costs $10 for three.

Soon, I’ll write about programming it in an extremely sketchy way, programming it with the programming host board I designed, designing and tuning the antenna, and how to design it into a larger project.

LightBeam

In the deep, dark, depths of my project “to-do” list, I’ve always had a persistence-of-vision bicycle wheel light penciled in. I felt capable of doing it many years ago, and indeed, documenting the wiring was one of the driving forces for starting this website, but I never got around to building one.

Eventually, products like the MonkeyLight came out, which did everything I wanted and more. Because “having the finished product” is rarely the goal of my projects, the very existence of a commercially available solution is sometimes enough to stop me from bothering.
For reference, the good version of MonkeyLight is $1000. Highway robbery! It is a great looking system, though.

But in August of 2017, I found a 32-LED bicycle spoke light on AliExpress. The cost at the time was $4.30.

Even now, it’s less than five dollars! That’s ridiculous! I can’t get individual components for that.
The only problem is that it displays preset patterns, and nothing more.

So I’m going to be upcycling! Get it?

And so I bought two.

 

Here are a few terrible pictures of what they look like in action:

 

There’s the top and bottom. Fairly straightforward. Light-dependent resistor and vibration sensor feeding into a microcontroller.

It’s super hard to photograph white soldermask in a way that displays the traces. A combination of that and buzzing out pins results in a schematic, like so:

U2 is probably (similar to) an AT24C02, an EEPROM chip, which isn’t populated on this PCB. Cost reduction on display! There’s an associated pull-up resistor and connector, also missing. Presumably there’s some model out there where you can plug in an I2C device and talk to the microcontroller, or add data to the EEPROM.

 

The other one is probably an EM78P153B, based on its pinout. Or very similar. Depending on the exact model, it’s either one-time-programmable memory, or masked ROM from the factory. Either way, there’s nothing I can do to its code. So, removal it is!

It’s interesting to see microcontrollers like this, which rarely make it out of China.

 

I have a pile of PIC18F14K50s, so I threw the footprint onto a small board with headers that matched the 1.27mm pitch of the original microcontroller. I was using Upverter for this PCB, and it provides a pretty neat histogram:

 

After that, gotta see if it fits, using the convenient laser cutter on hand.

 

 

It most certainly does not! So the microcontroller was swapped with a smaller one, the PIC16F1619.

 

And it totally fits, so off to OSHPark it goes. You’ll notice how neatly the traces could be routed on this one. The PIC16F1619 is one of the newer PICs, and they all have something called Peripheral Pin Select. Basically, most peripherals can be reconfigured onto whichever pins you like. It’s pretty fantastic for simplifying board layout.

 

 

Receive boards, solder, inspect, looks good!

 

And then at the same time, remove old microcontroller…

 

…add new flash chip and headers…

And then solder the whole thing down.

 

 

Feel free to fork/download/whatever the hardware files from Upverter, here. I’d be pretty stoked if someone else made some of these, too!

Another hardware update and software is coming in another post very soon. It’s a bit of a tangled web of interrupts, so I’d like to document it properly.

This PCB mostly worked, but the (optional) Hall effect sensor was bad and I changed it. And, final pictures of the result! That’s pretty important.

Infin1D, Rev 0.5

Before YouTube existed, I saw a video online. There was no spoken dialog, just subtitles in Japanese, with upbeat music in the background. It showed point-to-point soldering of a document scanner’s sensor and some sort of microcontroller. Then the single line sensor was put into a box (with a lens) and taken for an adventure.

I’d love to find this video again, but the main takeaway was that by reducing your camera to one dimension, you can fake the second dimension and create infinitely long images.

The creator of that original video set it up beside some train tracks, and got a long image of an entire train as it drove past the camera. Similarly, you could put it beside a road and get images of cars, or in the playa at Burning Man and capture the eccentric costumes as Burners ride past on bikes.

 

I wanted to build a quick test. Lurking in my parts bins, I have a single-line camera module, and a massive pile of old LCDs. For this, I selected an Arduino shield labelled MCUFRIEND 3.6″.

Searching around reveals that this model doesn’t actually seem to exist, but okay! The standard method for driving most Arduino LCD shields is to find example libraries that look like they have a reasonable chance of working, and then try them all until something does. I had to borrow an Arduino, because I’m not a huge fan, so I don’t even own one. How embarrassing.

In this case, the official-looking repository by prenticedavid is here, and GLUE_Demo_400x240 seemed to do a good job of driving this one. The code initialises it as 0x9327, which is the ILI9327 LCD driver. The datasheet for the ILI9327 is very good, so I may keep the LCD and write my own drivers for the new architecture when I dump Arduino.

I did attempt to follow the maze of #defines in the Arduino library code, but I don’t recommend anyone do that if they value their sanity. I also put a logic analyser on it to grab just the output; this produced similarly convoluted and unhelpful results.

 

 

While I’ll eventually be using a sensor designed for document scanners (lots more on that, stay tuned), the TSL1401 is an off-the-shelf solution that’s good enough for this test.

 

An all-in-one sensor, lens, and brains that takes logic-level timing and spits out analog values. 128 pixels tall, greyscale, and tiny, it’s objectively kind of bad, but it fits the bill for now. I got it up and running in an hour or two.

 

Cool! That means I have a reference implementation. I’m not running this on an Arduino in the final version, but it got the job done in a quick-and-dirty way.
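The reference code ran on the borrowed Arduino, but the readout sequence itself is simple enough to sketch. Here’s a rough MicroPython-style outline of it (pin numbers are placeholders, not my actual wiring), just to show how one analog line becomes a column of the final image:

from machine import Pin, ADC

si = Pin(5, Pin.OUT)      # start-integration pulse (placeholder pin)
clk = Pin(4, Pin.OUT)     # pixel clock (placeholder pin)
ao = ADC(0)               # the sensor's analog pixel output

def read_line():
    pixels = []
    si.value(1)           # latch the previous integration period
    clk.value(1)
    si.value(0)
    clk.value(0)
    for _ in range(128):  # one analog value shifts out per clock
        pixels.append(ao.read())
        clk.value(1)
        clk.value(0)
    return pixels

# Faking the second dimension: every call becomes one column of an arbitrarily long image
image = [read_line() for _ in range(1000)]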

The whole thing was modeled up in SolidWorks, along with dimensionally accurate stand-ins for the electronics.

 

There are two 3D-printed parts, adapted from models of earlier jigs I’ve done, in keeping with the ethos of this test: quick and dirty.

I typically design around M3 screws, and used brass heat-set inserts to turn 3D printed cavities into threaded holes. As much as I dislike 3D printing, it has its uses.

 

Both halves of the 3D-printed enclosure failed while printing the top few layers, but the baseplates are there, so it’s nothing that can’t be fixed with some standoffs and hot glue.

 

The results from this test project, predictably, are pretty bad.

This is a handheld scan across my keyboard.

 

As a stepping off point, it’s served its purpose. Here is an incomplete list of changes that future prototypes will feature:

  • A better microcontroller. Probably something from the STM32F1 or STM32F4 series. I like the flexible memory controller, which can map regions of the address space directly to LCD driver chips.
  • A better sensor. The current one is only 128 pixels tall, greyscale, poor quality, and relatively expensive for what you get.
  • UX – rotary encoders, buttons, LCD widgets:
    • Start/stop capture
    • Speed up/slow down scan/capture rate
    • Integration time adjustment
    • Post capture stretch/compress
    • Histogram
  • Saving to SD card
  • Battery power

Changing the sensor might not actually make it into the next rev; it’s a big project on its own: sourcing a sensor (probably through Taobao), figuring out how it works, driving it, building a board to work with it, and all the mechanical work of lens selection, mounting, and measuring.

Either way, I’m happy to leave this for now, and return to it when I have cleared out some of my project backlog.

Sure, we’re brute-forcing, but what’s the rush?

Ironically, in the process of writing a really fast implementation of the WPA2 encryption scheme, it was necessary (or at least easier) to first write a really slow implementation.

Here is my wpa2slow Python module.

To be fair, the actual speed at which this hashes isn’t the point. The point is that I have an entirely native Python implementation of how the entire WPA2 algorithm works, with an emphasis on clean, easy-to-read code, forgoing any speed optimisations that hinder legibility. Judging from the quantity of misinformation I’ve seen while researching this topic, someone will find this project useful. A lot of people have had trouble trying to fit all the pieces together over the years.

At first, I included the pieces as mock objects for unit testing. Each of the pieces (SHA1, HMAC, PBKDF2, etc.) was built up over a period of several months, vaguely mimicking the interface format of the equivalent existing hashlib libraries. Unfortunately, all of the standard Python hash libraries are really inconsistent!

For example, to hash a string with SHA1, I’d use:

output = hashlib.sha1(data).hexdigest()

Makes sense. But to then hash it with HMAC-SHA1, I’d do:

output = hmac.new(secret, data, hashlib.sha1).hexdigest()

Which is pretty weird, in this context. It makes more sense when you realise that HMAC can work with multiple sub-algorithms (commonly SHA1 or MD5), and that you can run multiple rounds while adding more salt. That’s not the case in my implementation, however, so copying the format was a mistake.
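For what it’s worth, the way the pieces ultimately chain together can be sanity-checked against the standard library. This snippet isn’t the wpa2slow API, just a hedged reference point with made-up credentials:

import hashlib

# PMK/PSK derivation: PBKDF2-HMAC-SHA1, 4096 rounds, 32-byte output
passphrase = b"correct horse battery staple"   # made-up example values
ssid = b"MyNetwork"
pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)
print(pmk.hex())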

One thing that rests solely on my shoulders is the massive amount of debugging information, hacky fixes, and generally poor code I’d written while focusing on an entirely different problem.

If I had written it cleanly from the beginning, it would have been a lot less work. That was silly of me.

 

Anyway. It’s pretty decent now. Check it out, read the docs, or install it!

pip install wpa2slow

ReacTick R1.5

Prototyping aw yeah.

 

After the first version of my LCD widgets, there were some things that I was planning on changing for the next respin. R1 was just to get some hardware in my hands and start writing code. This next one is all about the cleanup.

 

The last version of R1 is here on GitHub. It works… Poorly. The code in the test folder is stuff to help me prototype, before I got to a working model. Originally, I tried writing an LCD driver in Python for the Raspberry Pi GPIO, but the output seemed super unreliable. Instead of taking the time to troubleshoot why, it was faster to port that code onto a PIC dev board, which worked great.

Once I had a known-good LCD (using the shift register), I finished soldering up the rest of the board and tried to bring up the whole thing. It never quite worked properly, and here’s why!

When I was first drawing my schematic, the most common ESP8266 module was the ESP-12. Right around that time, November of 2015, the ESP-12E had just come out, with a bunch of extra pins. No one knew much about these pins yet, but conventional wisdom suggested that it was safe to include them in my design, allowing me to get away without using an additional microcontroller. Turns out these extra pins are connected to the flash memory, and using them willy-nilly causes strange reset issues. Guess what kept happening when I was trying to test my fully populated board?

So before a complete rethink was in the cards, here was the original plan for revision 2:

  • The LCD has two little mounting tabs on the sides. An appropriately sized via to accommodate them will make the LCD fit better and prevent wobble
  • One of those mounting holes will interfere with the switches. Move them somewhere else. Specifically on one of the short edges, because the current location causes damage to the LCD when the buttons are pressed while the screen is face down on a table
  • Silkscreen for momentary switches to include functions (RST / PROG)
  • Add in the clever circuit that the NodeMCU group uses to enable button-less programming
  • Remove all unnecessary resistors:
    • R7 connected to LCD_RST
    • R10 connected to LCD_RS
  • Change LEDs to 0805 or something similar. I’m using 3528s, and they are huge and super bright and look out-of-place
  • Change user-settable LED to connect to a pin that is not GPIO16. Apparently some manufacturers (not mine) connect it internally to the ESP8266 reset pin
  • Change all the SIPO shift register pins from bit-banging to use the ESP serial ports – Should be faster, and allows me to…
  • Change requirements from an ESP-12E module to just an ESP-12, which has fewer pins
  • Break out extra ESP pins to some unpopulated pads for future hacking
  • Break out LCD touchscreen pins for future hacking
  • Add more testpoints for debugging (and future hacking)
  • Big decoupling cap

 

Some of these ended up making it in, but the new system design made other points moot. So what’s new in R2? Stay tuned to find out!

R2 Paper

Unpacking WPA2

As discussed in a previous article, WPA2 encryption is composed of three different algorithms layered on top of each other. I went over them in a very brief overview, so here is a more in-depth discussion of how I optimised and implemented them on an FPGA.

This presentation, also linked in that previous article, is worth going back and reading again. It really is a great overview of each of the three algorithms and how they fit together, without muddying the waters with the low-level details.

Disclaimer: This is my process for understanding and breaking them down. I will be using non-standard terminology, and there are certainly other ways of internalising these algorithms. My methods aren’t the only methods.

The lowest building block, SHA1, has four distinct stages of note: Load, Process Load, Process Buffer, and Output.

In a completely linear implementation, it would look like this:

SHA1 Linear

I’m assuming a bus width of 32 bits. The input/output blocks on the end are fixed in size, and cannot overlap, at least not if we want to maintain half-duplex compatibility with the parent device. That may or may not be important, but it sure does simplify the implementation for now. 165 clock cycles total.

We have a little more wiggle room with the red blocks. Because the buffer can be processed as it is getting populated by the Process Load stage, we can merge them. Extrapolated, and using the input/output as the bottleneck, we come up with this:

SHA1 Parallel (2)

 

Notice that there’s still only one green section running at any single point along the horizontal. That’s the critical path, and dictates how many parallel operations we want to run at once.

It works out to 106 clock cycles per pass, assuming we run 5 hashes in parallel. So, 22 clocks per hash. Way better.

hmacpseudo

The next block up, the HMAC function, contains two SHA1 functions, which is sort of annoying. The second/outer call is also dependent on the output of the first/inner one, so I can’t parallelise that at all. To fill up the 5 SHA1 operations I have going at a time, I must also be running 5 separate HMAC operations.

There is one optimisation to be done, though, mentioned in the presentation. It’s not obvious how it’s useful in a concurrent environment, but bear with me.

Part of the algorithm requires two buffers containing the ‘secret’ portion of the HMAC input, XORed with 0x36 or 0x5c and padded out to 64 bytes. The parent algorithm, PBKDF2, iterates two HMAC functions 4096 times, all with the same secret variable. Those buffers can be calculated once and used for the entire loop.

If everything is done in parallel, then one would expect that the calculation could be done on-the-fly with little to no penalty. When you consider that each round of SHA1 processes 64 bytes, however, you realise that a padded 64-byte ‘secret’ block, followed by the input message, requires two complete rounds of the SHA1 function to come up with the result. But that first round of SHA1 produces the same output every time, and can be precalculated. This turns the HMAC algo from what is effectively four SHA1 operations into only two.
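The same trick is easy to demonstrate in software. Here’s a minimal Python sketch of it, using hashlib’s copyable hash state to stand in for the precalculated pad rounds (this assumes the secret fits in one 64-byte block, which a WPA2 passphrase always does):

import hashlib

def precompute_pads(secret):
    # Pad the secret out to one 64-byte SHA1 block and absorb the ipad/opad rounds once
    key = secret.ljust(64, b"\x00")
    inner = hashlib.sha1(bytes(b ^ 0x36 for b in key))
    outer = hashlib.sha1(bytes(b ^ 0x5C for b in key))
    return inner, outer

def hmac_sha1(inner, outer, message):
    # Only two SHA1 rounds left per call, instead of four
    i = inner.copy()          # reuse the precalculated inner-pad state
    i.update(message)
    o = outer.copy()          # reuse the precalculated outer-pad state
    o.update(i.digest())
    return o.digest()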

 

pbkdf2pseudo

 

Note that they’ve actually written this algorithm incorrectly: there is an additional XOR that folds each stage into the final x1/x2 result, but no matter for this discussion.
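To make that concrete, here is a hedged pure-Python sketch of the same loop, with the missing XOR included (slow on purpose; the variable names are mine, not from the pseudocode):

import hmac, hashlib

def pbkdf2_block(passphrase, ssid, block_index):
    # One 20-byte block of the derivation: 4096 HMAC-SHA1 rounds, XOR-folded together
    u = hmac.new(passphrase, ssid + block_index.to_bytes(4, "big"), hashlib.sha1).digest()
    result = u
    for _ in range(4095):
        u = hmac.new(passphrase, u, hashlib.sha1).digest()
        result = bytes(a ^ b for a, b in zip(result, u))   # the XOR the pseudocode leaves out
    return result

def pmk(passphrase, ssid):
    # x1 and x2 are two independent blocks; the PMK is the first 32 bytes of x1 || x2
    return (pbkdf2_block(passphrase, ssid, 1) + pbkdf2_block(passphrase, ssid, 2))[:32]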

The PBKDF2 part is by far the most expensive step, because the area of most FPGAs will be too small to unroll 8192 copies of the SHA1 algorithm (each of which requires a minimum of 80 words × 4 bytes × 8 bits = 2560 bits of buffer space). One more optimization could be done, given the right conditions, and it’s a doozy:

The final, very last operation in the whole mess is to append x2 to x1. No hashing after that. That means the final output is two groups of 5 words (40 bytes total). This is the Pairwise Master Key. It is statistically unlikely that the first 20 bytes will match while the second group is wrong, so checking the first group alone is enough. This means that if we already have the PMK and only need to verify a candidate against it, we can do these calculations with exactly half the silicon area, or at twice the speed.

Unfortunately, in this particular use-case we don’t have it, but it’s a great trick to keep up our sleeve.

 

 

So what do we have in this use-case?

 

We’ve started from the WiFi SSID and passphrase, known as the master key (MK), and from that generated the Pre-Shared Key (PSK), which is called the Pairwise Master Key in this particular implementation of the PBKDF2 algorithm.

To verify the PMK against the passphrase, something called the “Pairwise Key Expansion” is calculated. From the captured WPA2 packets, we have a few variables: the client MAC address, the AP MAC address, the client nonce, the AP nonce, the Message Integrity Check (MIC), and the packet body.

The first four variables, along with the PMK, get combined in kind of an annoying way, and the result is then compared against the MIC.

They just call it the pseudo-random function (PRF), and it goes kinda like this:

a = "Pairwise key expansion";
b = min(APMac, CMac) . max(APMac, CMac) . \
    min(APNonce, CNonce) . max(APNonce, CNonce);
r = "";
for(i = 0; i < 4; i++) {
    r = r . HMAC_SHA1(PMK, a . "\0" . b . chr(i));
}
return r[0:64];

 

Yeah, the actual string is used in the PRF.

This gives us the Pairwise Transient Key (PTK), which is then combined with part of the (encrypted) body of the packet we captured:

 

mic = hmac.new(ptk[0:16], data[60:121], hashlib.sha1).digest()[:16]   # the MIC field is only 16 bytes

 

And then compare this to the MIC we’ve already captured! Easy, right?

No! It’s a huge pain, and it would add another HMAC_SHA1 block that we don’t really have room for. This will probably be implemented in the microcontroller firmware or the host software of my system.

Reconfigurable CNC platform

I’ve got a big thing about building stuff in a modular way.

 

So I installed Fusion 360 yesterday, and I’m pretty impressed. Fusion 360 is a new SolidWorks competitor by Autodesk. It’s basically the equivalent of a $6k-8k CAD package, released free for hobbyists or businesses making less than $100,000 a year. That’s quite a hook.

Especially considering how good it is already. It’s not quite on par with SW, but the feel is very similar, and I can see it eventually being a strong contender. Plus, you know, free.

 

The parametric engine is also very good. I’ve been meaning to build a CoreXY platform for a long time. It’s an open source belt-driven CNC platform with balanced forces in the X and Y axes, an excellent build-area to platform-size ratio, and parts that are amenable to laser cutting. There isn’t a specific project it will go with, but you never know when you need to drop a CNC system into something. CNC everything!
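For reference, the balanced forces come from the standard CoreXY belt kinematics, where both motors share every straight move. A quick sketch of that mapping (the generic math, not anything specific to my files; sign conventions depend on how the belts are routed):

def corexy_motor_moves(dx, dy):
    # Both motors turn for a pure X or pure Y move, so belt forces stay symmetric
    da = dx + dy   # motor A displacement
    db = dx - dy   # motor B displacement
    return da, db

def corexy_head_moves(da, db):
    # Inverse mapping: back from motor displacements to head displacement
    return (da + db) / 2, (da - db) / 2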

The idea behind building it is that I’d like to be able to design and build a platform that conforms to the requirements of a specific project within about a day or so. There are a few major variables that could potentially change:

  • Motors
  • Belts / pulleys
  • Precision rod
  • Material thickness

I’ve already purchased belts and pulleys ($10 for the cheapest GT2 belt, $10 for matching pulleys), and ideally, I’d cover the rest with common items that I have kicking around, or can scavenge easily.

So I redesigned the CoreXY platform entirely using variables and formulae, based on the waterjet-cut CoreXY. Here’s a list of the variables that you need for a CNC, apparently:

CoreXY Variables

That’ll change a little as I continue tweaking.

All other measurements are derived from those. Now when I need a new system, I take stock of the motors, sheet material, and so on, and enter the values into the window. A laser cutter file magically appears on the other end.

Here’s my project. It’s definitely subject to change. Accounting for laser cutter kerf is on the roadmap, and waterjet cutting would be good to design for, too, although I’m not super familiar with the constraints on that.

 

The files are located in the cloud right here. The X carriage is coming in a few days when I’ve got a few minutes. This is also part of an ongoing project documented in this Vancouver Hack Space thread.

 

One thing that I had problems with is that Fusion 360 is not capable of reading parameters from subassemblies or parent assemblies. I had to copy the same set of variables to all of my parts, which is pretty annoying. Some internetting says that this feature will be implemented Real Soon Now(tm), as of 2014.

 

A good help for that was an add-in called ParameterIO. It doesn’t work out of the box, though; there is a bug caused by the dimensionless quantities in linear patterns.

I had to edit line 219 of:
C:\Users\Jarrett\AppData\Roaming\Autodesk\ApplicationPlugins\ParameterIO.bundle\Contents\ParameterIO.py

To say this:

unit = ' '
try:
    unit = _param.unit
except:
    unit = 'null'   # linear-pattern parameters are dimensionless and have no unit attribute
result = result + _param.name + "," + unit + "," + _param.expression + "," + _param.comment + "\n"
 

(Full file here for easier copying.)

 

That allows you to export all of the user parameters, and also the model parameters, as a CSV file. The same bug causes problems when you try to import!

Fortunately you can delete the second half of the CSV file before importing, because the model parameters are useless for unrelated parts.

An add-in to automagically inherit all parameters from parent assemblies would be awesome, and fairly straightforward, from the looks of things.

Maybe this feature already exists and I just haven’t found it, or maybe I will have to work on it when I’m done with the X axis of this thing.

I don’t lamp well

 

That was a long “next month”. A recap is in order. This post chronicles the long descent into complete and utter apathy.

 

In September 2014, I made some sketches for a lamp I wanted to build. The intent was to use the glass plate from a desktop scanner as the light diffuser, and laser-etch a fractal pattern onto it to create a frosted effect instead of leaving it optically clear.

Here is a test piece I did, with a laser cutter and an image found on the internet.

 

 

The results were pretty fantastic. Fine details get lost, because the mode of operation seems to be the laser heating the glass enough to chip off a small chunk before moving on, and so on for the entire image. It creates great-looking, even optical diffusion, though.

 

For another test, I decided to design and build a similar but smaller lamp. Glass scanner beds are in limited supply. Using a glass tile I found at a craft store, I designed an arm to hold it onto a wall, a few centimetres away from a PCB containing some high-power LEDs.

 

 

The initial model and 3D print is shown on my previous post.

Here is the final version, with some corrected measurements and a better mounting point.

Lamp print

 

And the PCB arrived shortly after the last post.

 

Lamp PCB

Oops!

That’s mistake number one. Everything was intended to be clean and white, but I guess I forgot to change the soldermask from the default DirtyPCBs red. It’s not the end of the world. This is a prototype of a prototype, after all.

 

The first board was populated, and then the lamp languished for a year and a half.

 

 

Recently, I found it buried in a locker and tried plugging it in for the first time. Without consulting any documentation, I tried it on a bench power supply, starting at 5V. Nothing happened, so I turned it up to 10. At 15V, the semiconductor on the board released some smoke and glowed red for a few minutes.

Nope!

 

Back at the docs, I read that I had used an adjustable 5V boost converter, so that explained that.

I soldered up another board (I had two spares of the IC), including the DC barrel jack this time, and plugged it in again. Turns out I had the wrong polarity!

No smoke, but some troubleshooting proved that I had definitely fried the chip.

 

This was pretty much the limit of how much I cared, so I did what anyone would do:

I jumped over the active parts of the circuit with a power resistor, and ran the LEDs directly from a 19v laptop power supply.

 

 

Job done!

Next time I’ll build in some more safety factor.

Additionally, the lamp is really, really bright when viewed from the side, because of the bare 1W LEDs. I kinda planned for this and put some slots in the side of the base for some acrylic sheets, but I’m quite done with this design.

Snap-on Desktop Widgets

And now, for my next trick, I’m going to manufacture a million tiny monitor widgets to snap onto your big monitors to monitor your widgets.

 

Still with me?

 

This is a project with many parts, but I will only get into one phase in this log.

 

I’m hooking up an ESP8266 WiFi module to an inexpensive TFT LCD. The idea is to have an internet-connected, smaller-than-credit-card sized screen that displays one thing, and one thing only.

Some use-cases could be the two-day weather forecast, a slideshow picture-frame, or a graph of the current price of Bitcoin. Things that don’t require more than a couple changes per second, and don’t warrant using up real estate on a main monitor when work needs to get done.

The system design is actually really simple. A mini-USB connector feeds power to a 3.3V linear regulator, and data to a USB-UART bridge. The bridge lets a host computer program the ESP8266 over USB, but that is not required for general operation. A standard cellphone charger plugged into the USB connector for power is fine. The WiFi module connects to an Access Point periodically and grabs a static image from a website.

This image is then fed to the LCD. That’s it. That’s all it does. I’ve gotten the BoM cost down to around $8 each, so it’s reasonable to have a lot of them on a desk, displaying various bits of data.
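To make the data flow concrete, here’s a hedged MicroPython-flavoured sketch of the fetch loop. The real firmware is written differently, and the credentials, URL, and draw_to_lcd() helper here are all made up for illustration:

import time, network, urequests

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("MyAccessPoint", "hunter2")   # placeholder credentials
while not wlan.isconnected():
    time.sleep(1)

while True:
    # Grab a pre-rendered frame; the server does all the heavy lifting
    img = urequests.get("http://example.com/widget.raw").content
    draw_to_lcd(img)   # hypothetical helper that pushes pixels out through the shift register
    time.sleep(60)     # a minute between updates is plenty for this kind of data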

 

The hardware files, including Gerbers, of revision 1 are here. There are also folders for the firmware, software, and test rig, but as of the time of writing, that is all very much a work in progress.

That link will soon be outdated, but I’ll tag the first revision as a “Release” in GitHub when I’ve got the different parts working.

As for next steps:

The LCD I’ve chosen is one of the cheapest ones I’ve found. It has a parallel data bus for communication, and I’ve used a 74HC595 serial-parallel chip to make it work with the ESP8266 module’s limited IO. I’m using an ESP-12E for reference, but trying to make it work with the original ESP-12 as a bonus.
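The nice thing about the ’595 is that it only needs three pins. The actual firmware does this differently, but the idea bit-bangs down to something like this sketch (pin numbers are placeholders):

from machine import Pin

data = Pin(13, Pin.OUT)    # SER
clock = Pin(14, Pin.OUT)   # SRCLK
latch = Pin(15, Pin.OUT)   # RCLK

def shift_out(byte):
    latch.value(0)
    for i in range(7, -1, -1):        # MSB first
        data.value((byte >> i) & 1)
        clock.value(1)                # bit shifts in on the rising edge
        clock.value(0)
    latch.value(1)                    # latch all 8 bits onto the parallel outputs at once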

Schematic

I think I can do some interesting things by replacing both the SIPO and the USB-UART bridge with a microcontroller. Things involving bootloaders, and things involving cross-monitor communication. Cool stuff.

Hysterical

After building my Piccolo and playing around, I really don’t like the software. It’s certainly simple and hackable and does a lot of what I need to do, but something about the Arduino/Processing pair doesn’t quite work for me.

Fortunately it’s just a bunch of servo motors, and I already have the PIC code in my toolbox.

Using my Arduino-form-factor PIC dev board and my Bus Pirate as a computer-UART bridge, I wrote a simple protocol to communicate.

The “communication enable” pin goes high, and then two (might change to three) bytes get sent. The first is an “address” or command, and the second (or second and third) bytes are a value/argument.

Address 0 is the X axis, 1 is Y, and 2 is Z. The Z axis only takes binary 1 and 0 as arguments.

The Z axis has two states: active and inactive. Inactive/up is the parked position, obviously. The active mode has a hysteresis loop attempting to control the position. The feedback is wired up to a window comparator, like so:

On the computer side, I’m controlling the Bus Pirate with a Python script that feeds it one scalar for a given axis at a time.
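The host side really is that small. Here’s a rough sketch of it with pyserial, glossing over the Bus Pirate’s own mode-setup handshake; the port name and baud rate are assumptions:

import serial

AXIS_X, AXIS_Y, AXIS_Z = 0, 1, 2

def send(port, axis, value):
    # Protocol from above: first byte is the address/command, second is the argument
    port.write(bytes([axis, value & 0xFF]))

bp = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # assumed port and baud
send(bp, AXIS_X, 128)   # move X somewhere near the middle of its range
send(bp, AXIS_Z, 1)     # Z active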

The speed at which the PIC executes the movement is set in the processor for now, to be tweaked. I will probably add a command for it when that becomes cumbersome.

As more commands become necessary, I’ll be adding more to the processor, I guess. Maybe eventually I’ll implement a rudimentary G-code interpreter.

 

Now for the fun part:

 

Part of my original design goal was to have a 10ns response time. That’s very fast. Most of the old monolithic MOSFETs I’m using (because they’re cheap) have turn-on times in the tens of microseconds.

That totally blows my requirements out of the window, but there are still some optimizations to be made. By using analog circuitry, I can directly control certain parameters and speed them up, compared to going through a microcontroller and being at the mercy of a clock source and interrupts getting in the way.

I’ve designed a window comparator. It looks like this:

Window Comparator

If the input goes above the “HIGH” voltage, then the top output turns on. If it goes below the “LOW” level, then the bottom output turns on. There will be two of these circuits. By feeding these outputs to some discrete logic, or something clever that I haven’t thought of yet, I can turn the transistors for the different power stages on or off.

EDM Schematic V2

There are three power stages: Stage 1 (rectified input), Stage 2 (charge), and Stage 3 (output).

 

The S2 MOSFETs can be ignored; just treat them as one. I should be able to parallel as many as I need, within reason, and it doesn’t change (most of) the math.

 

The input of one window comparator is from VOUT, and the outputs are hooked up to the micro. If the HIGH output is on, then that means the MOSFET Q2 is on, but the EDM electrode has not made contact with the workpiece. Start (or continue) jogging the Z axis down.

If the LOW output is active, then we’ve gone down too low; start jogging up.

The other comparator’s input is at S2VCC. That controls the turn-on and turn-off of the MOSFETs. If C3 is too low, then Q2 must be shut off and Q1 turned on to charge it up. When it is high, flip that. The idea is that Q1 and Q2 should never be on at the same time, which would provide a direct path to ground. The logic here will also involve halting the Z jogging, or making it jog up.
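Spelled out in Python-flavoured pseudocode, just to make the two loops explicit (the real thing is comparators plus PIC firmware, and the exact rules will change with experimentation):

def charge_logic(s2vcc_high, s2vcc_low, q1, q2):
    # Hysteresis: between the two thresholds, keep doing whatever we were doing
    if s2vcc_low:
        q1, q2 = True, False    # C3 too low: stop discharging, recharge it
    elif s2vcc_high:
        q1, q2 = False, True    # C3 topped up: stop charging, dump into the gap
    return q1, q2

def z_logic(vout_high, vout_low):
    if vout_high:               # Q2 is on but the electrode isn't touching yet
        return "down"
    if vout_low:                # gone too far, back off
        return "up"
    return "hold"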

 

So there you go. With two different kinds of hysteresis going on at once, there will be some experimentation with how they work together. That leads me to one last trick:

You see the HIGH and LOW inputs on the comparator circuit? Those need to be analog voltages. I’m going to set them with PWM outputs on the micro, with caps smoothing them out to analog values. Varying the duty cycle of the PWM will allow me to vary the analog voltage.
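The arithmetic is nothing more than duty cycle times the supply rail, once the RC filter has averaged it out. A tiny sketch, assuming a 3.3V rail (the real threshold values are still to be worked out):

VDD = 3.3   # assumed supply rail

def duty_for_threshold(v_threshold):
    # A well-filtered PWM output averages out to duty * VDD
    return v_threshold / VDD

print(duty_for_threshold(1.8))   # ~0.55, i.e. a 55% duty cycle for a 1.8V threshold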

 

Counting up my PWM outputs:

Two for each window comparator (4 total)

One for each axis (7 total)

One for the oil pump (8 total)

 

I haven’t discussed that last one yet, but stay tuned!