Archive for Jarrett

Very Remote Control

Instead of a TV at home, I use a projector. It’s on my ceiling, with the buttons inaccessible. The remote for it also isn’t really working anymore. Problem.

There’s nothing on the remote control’s PCB except for an obscure microcontroller (TTR013), an IR LED and driving transistors, and some carbon contacts for the buttons. The intermittent operation isn’t caused by the contacts deteriorating, and the microcontroller is getting power, so troubleshooting opportunities are limited.

 

But, the internet is a wonderful place, and someone has just straight up recorded the exact remote control I need and posted it up on Github.

The config file is here, but they also have the raw recordings. Apparently it’s a config file for a Linux IR transmitter driver called LIRC. The config documentation is here.

 

It’s pretty straightforward, so I set out to duplicate the waveform using the RMT peripheral of the ESP32. The RMT peripheral is an arbitrary waveform generator, and one of the typical use-cases is an IR transmitter. Perfect.

My final code is here. From my reading of the LIRC docs, the relevant config data is this:

 

header 9077 4504
one 602 1622
zero 602 511
ptrail 604
gap 108167

begin codes
ON 0x000CF20D

 

Where a binary 1 is denoted by an on-pulse of 602 microseconds and then an off-pulse of 1622 microseconds. Similarly for 0, it’s 602-on and 511-off.

The code for ON uses that sequence to write out 0x000CF20D.

 

The whole packet starts with the header sequence (9077-on, 4504-off) then 0x000CF20D, then the tail/gap sequence (604-on, 108167-off). Seems straightforward.
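Putting those numbers together, the whole packet can be sketched as a list of (on, off) durations in microseconds. Here’s a minimal Python sketch of that expansion (lirc_to_pulses is my own scratch name, not anything from LIRC or the repo):

```python
def lirc_to_pulses(code, bits=32,
                   header=(9077, 4504), one=(602, 1622),
                   zero=(602, 511), ptrail=604, gap=108167):
    """Expand a LIRC-style code into (on_us, off_us) pairs, MSB first."""
    pulses = [header]
    for i in range(bits - 1, -1, -1):
        pulses.append(one if (code >> i) & 1 else zero)
    pulses.append((ptrail, gap))  # trailing mark, then the inter-packet gap
    return pulses

packet = lirc_to_pulses(0x000CF20D)
# header + 32 data bits + tail = 34 (on, off) entries
```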

 

I put the ESP32 on an oscilloscope to make sure the RMT was doing what I wanted it to do.

 

It seems to be! But this is more of a logic analyser task, so I pulled that out. The waveform looked exactly as I expected, so I tested it out with an IR LED.

 

 

And it didn’t work at all. Nuts.

By connecting the remote control to the logic analyser and powering it with a nice bench power supply, I was sometimes able to get output data. Enough to grab a capture after several minutes of mashing the carbon contacts with a brass standoff.

Here’s the full waveform:

It’s worth noting that the waveform is inverted from the actual LED current, due to measuring at the LED with low-side switching. Don’t worry about it too much, just invert the logic.

Sigrok can export the waveform in a Value Change Dump format, which looked good enough for my purposes. Then I wrote a python script to convert VCD files into the RMT packet format.
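The core of that conversion is simple enough to sketch here. This assumes a single-signal capture and timestamps already scaled to the units you want by the VCD $timescale (the function name is mine, not from my actual script):

```python
def vcd_to_runs(vcd_text):
    """Turn a single-signal VCD dump into (level, duration) runs."""
    edges = []  # (time, level) at each value change
    t = 0
    for line in vcd_text.splitlines():
        line = line.strip()
        if line.startswith('#'):                   # timestamp line, e.g. "#602"
            t = int(line[1:])
        elif len(line) == 2 and line[0] in '01':   # value change, e.g. "1!"
            edges.append((t, int(line[0])))
    # the duration of each level is the gap to the next edge
    return [(lvl, t2 - t1) for (t1, lvl), (t2, _) in zip(edges, edges[1:])]
```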

It looked good, with the exception of some glitches caused by the RMT not being able to handle very long delays relative to the fast switching. This strategy still didn’t work on my projector, though.

Speaking of fast switching, let’s investigate that further. Abandoning the Github config files so quickly didn’t sit quite right with me. It was too perfect.

Going back to my capture of the remote, here’s a binary 1:

 

Here’s a 0:

 

And then here’s the header, along with the data portion:

 

And the header with the whole data portion:

 

It does look right. But why was this so different from the Github repo?

Well, obviously this has a carrier wave that I totally blew by.

 

Wow, okay. Back to the LIRC documentation. There is a frequency option that specifies the carrier wave, and it defaults to 38kHz. So it’s not in the config, because it’s set by default, and it’s one line in the documentation. No wonder I missed it.
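In other words, every “on” period in the config is actually a burst of 38kHz carrier cycles. The RMT peripheral does this in hardware, but for illustration, here’s roughly what the modulation amounts to (a sketch; the 35% duty cycle is what I ended up configuring):

```python
CARRIER_HZ = 38000
PERIOD_US = 1_000_000 / CARRIER_HZ   # one carrier cycle is ~26.3 us
DUTY = 0.35

def modulate(on_us):
    """Chop a single 'on' pulse into 38kHz carrier cycles, as (high_us, low_us) pairs."""
    cycles = round(on_us / PERIOD_US)
    high = PERIOD_US * DUTY
    return [(high, PERIOD_US - high)] * cycles

# a 602us mark becomes ~23 carrier cycles; the 9077us header mark, ~345
```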

Honestly, my brutish Python script that specifies all of the changing values is probably good enough, but it feels wrong to use a 1500-line lookup table instead of a fixed carrier frequency and 15 lines of actual data. The RMT peripheral made it incredibly easy to fix up.

The end result looks really good, but let’s compare with the captured data again.

Keeping in mind that the data to be sent is 0x000CF20D, I’ve annotated the capture:

 

And it looks mostly good, except… What’s that block at the end?

I can’t find anything in the original config or documentation for the config that would explain that last block. That will remain a mystery for now.

 

Anyway, it was about this time that I got suspicious of the ancient IR LEDs in my parts bin. If I cranked the current, they looked visibly blue, which is obviously the wrong end of the spectrum when I’m looking for IR LEDs. I grabbed a spectrometer and measured one – Yep, that’s not right.

After combing through my parts bin, I found another IR LED and measured it: 815nm. Still not right. Fortunately, I was juuust barely able to measure the wavelength, though.

Looks like a very broad band 815nm, if the peak stretches all the way to 750nm.

I’m pretty sure I’m looking for a 940nm LED, so some more combing and I found an IR proximity sensor. I don’t have a datasheet, but pointing a TV remote at it triggered the on-board red LED, so I knew it was simpatico.

A quick hack-job later, and I drove the LED directly from the ESP32. My projector turned on, without any further changes to my code. And it successfully turned off the projector, too.

So that’s how I controlled my home entertainment setup with an IR proximity sensor.

 

Sometime in the middle of this journey, I hooked up the LA to a little 315MHz receiver module and triggered my garage door remote. I won’t be posting the waveforms here, but minor modifications to my Python script and code worked out of the box to clone the remote. That whole process took about 15 minutes, so it was a nice and useful diversion. Because it’s attached to an ESP32, I can now trigger my garage door over the internet. From anywhere in the world! Very, very remotely.

 

This whole project is very specific to my needs, but it could be helpful to others. There is now a pipeline for converting LIRC config files into ESP32 RMT outputs. I doubt I’ll ever do this again, but, just for giggles, here’s a GPT-assisted script that will do it:


import re
import argparse


class LIRCConfigParser:
    def __init__(self, file_path):
        self.file_path = file_path
        self.config = {}
        self.header = None
        self.one = None
        self.zero = None
        self.ptrail = None
        self.gap = None
        self._parse_file()

    def _parse_file(self):
        with open(self.file_path, 'r') as file:
            content = file.read()
        self._parse_timing_parameters(content)
        remote_blocks = re.findall(r'begin remote(.*?)end remote', content, re.DOTALL)
        for block in remote_blocks:
            remote_name = re.search(r'name\s+(\S+)', block)
            if remote_name:
                remote_name = remote_name.group(1)
                self.config[remote_name] = self._parse_remote_block(block)

    def _parse_timing_parameters(self, content):
        header = re.search(r'header\s+(\d+)\s+(\d+)', content)
        if header:
            self.header = (int(header.group(1)), int(header.group(2)))
        one = re.search(r'one\s+(\d+)\s+(\d+)', content)
        if one:
            self.one = (int(one.group(1)), int(one.group(2)))
        zero = re.search(r'zero\s+(\d+)\s+(\d+)', content)
        if zero:
            self.zero = (int(zero.group(1)), int(zero.group(2)))
        ptrail = re.search(r'ptrail\s+(\d+)', content)
        if ptrail:
            self.ptrail = int(ptrail.group(1))
        gap = re.search(r'gap\s+(\d+)', content)
        if gap:
            self.gap = int(gap.group(1))

    def _parse_remote_block(self, block):
        remote_config = {}
        lines = block.splitlines()
        key_section = False
        for line in lines:
            line = line.strip()
            if line.startswith('begin codes'):
                key_section = True
                remote_config['codes'] = {}
            elif line.startswith('end codes'):
                key_section = False
            elif key_section:
                parts = line.split()
                if len(parts) == 2:
                    key, value = parts
                    remote_config['codes'][key] = value
            else:
                if ' ' in line:
                    key, value = line.split(None, 1)
                    remote_config[key] = value
        return remote_config

    def get_config(self):
        return self.config

    def get_remote_names(self):
        return list(self.config.keys())

    def get_remote(self, remote_name):
        return self.config.get(remote_name, None)


c_start = '''
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/rmt.h"
#include "driver/gpio.h"
#include "esp_system.h"
#include "esp_log.h"

#define RMT_IO (GPIO_NUM_18)
#define ON_OFF_CONTROL (GPIO_NUM_19)
#define PIN_INDICATOR (GPIO_NUM_5)

static const char* TAG = "ir_transmitter";

#define RMT_END {{{ 0, 1, 0, 0 }}}
'''

c_end = '''static void send_on()
{
    ESP_ERROR_CHECK(rmt_write_items(RMT_CHANNEL_0, on_key, sizeof(on_key) / sizeof(on_key[0]), true));
    vTaskDelay(44 / portTICK_RATE_MS);
    ESP_ERROR_CHECK(rmt_write_items(RMT_CHANNEL_0, tail_key, sizeof(tail_key) / sizeof(tail_key[0]), true));
}

static void send_off()
{
    // Off needs to be pressed twice
    ESP_ERROR_CHECK(rmt_write_items(RMT_CHANNEL_0, off_key, sizeof(off_key) / sizeof(off_key[0]), true));
    vTaskDelay(2000 / portTICK_RATE_MS);
    ESP_ERROR_CHECK(rmt_write_items(RMT_CHANNEL_0, off_key, sizeof(off_key) / sizeof(off_key[0]), true));
}

void app_main(void)
{
    rmt_config_t config = RMT_DEFAULT_CONFIG_TX(RMT_IO, RMT_CHANNEL_0);
    // set counter resolution to 1us
    config.clk_div = 80;
    config.tx_config.carrier_en = true;
    config.tx_config.carrier_freq_hz = 38000;
    config.tx_config.carrier_duty_percent = 35;
    ESP_ERROR_CHECK(rmt_config(&config));
    ESP_ERROR_CHECK(rmt_driver_install(config.channel, 0, 0));

    gpio_set_direction(ON_OFF_CONTROL, GPIO_MODE_INPUT);
    gpio_set_pull_mode(ON_OFF_CONTROL, GPIO_PULLUP_ONLY);
    gpio_set_direction(PIN_INDICATOR, GPIO_MODE_OUTPUT);

    while (1) {
        gpio_set_level(PIN_INDICATOR, 1);
        if (gpio_get_level(ON_OFF_CONTROL) == 1) {
            send_on();
        } else {
            send_off();
        }
        vTaskDelay(1);
        gpio_set_level(PIN_INDICATOR, 0);
        vTaskDelay(40 / portTICK_RATE_MS);
    }
}
'''


def generate_c_code(config, output_path, timings):
    with open(output_path, 'w') as file:
        file.write(c_start)
        if timings.header and timings.one and timings.zero and timings.ptrail and timings.gap:
            file.write(f'#define PACKET_HEADER {{{{ {timings.header[0]}, 1, {timings.header[1]}, 0 }}}}\n')
            file.write(f'#define ONE_BIT {{{{ {timings.one[0]}, 1, {timings.one[1]}, 0 }}}}\n')
            file.write(f'#define ZERO_BIT {{{{ {timings.zero[0]}, 1, {timings.zero[1]}, 0 }}}}\n')
            file.write(f'#define PACKET_TAIL {{{{ {timings.ptrail}, 1, {timings.gap}, 0 }}}}\n\n')
            file.write(f'#define PACKET_ZERO_NIBBLE ZERO_BIT,ZERO_BIT,ZERO_BIT,ZERO_BIT\n\n')
        for remote_name, remote_config in config.items():
            file.write(f"// Remote: {remote_name}\n")
            if 'codes' in remote_config:
                for key, value in remote_config['codes'].items():
                    file.write(f"//{value}\n")
                    file.write(f"static const rmt_item32_t {key.lower()}_key[] = {{\n")
                    file.write(f"    PACKET_HEADER,\n")
                    # Walk the hex digits (skipping the '0x' prefix), emitting
                    # four bits per nibble so leading zeroes aren't dropped
                    for nibble in value[2:]:
                        file.write(f"    //{nibble}\n")
                        for bit in bin(int(nibble, 16))[2:].zfill(4):
                            if bit == '1':
                                file.write(f"    ONE_BIT,\n")
                            else:
                                file.write(f"    ZERO_BIT,\n")
                    file.write(f"    PACKET_TAIL,\n")
                    file.write(f"    RMT_END\n")
                    file.write(f"}};\n\n")
        file.write(c_end)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Parse LIRC config files and generate a minimal C file.')
    parser.add_argument('file_path', nargs='?', default='lircd.conf', help='Path to the LIRC config file')
    parser.add_argument('--output', '-o', default='main.c', help='Output C file')
    args = parser.parse_args()
    lirc_parser = LIRCConfigParser(args.file_path)
    config = lirc_parser.get_config()
    generate_c_code(config, args.output, lirc_parser)
    print(f"Generated C code has been written to {args.output}")

Run this with python esp32_convert_lirc_to_rmt.py config.conf --output main.c

It will generate a main.c file that should just compile and have the ESP32 happily spitting out valid data.

Key Storage

I’m on a home improvement kick. When you walk into my home, there’s a long, featureless hallway with the kitchen at the end of it. I was using a tray on my kitchen counter to toss keys onto when I walked in the door. This takes up valuable kitchen counterspace and, being at the end of the hallway, was not ideal.

 

I designed up a wall-mounted tray that looked kinda nice. Then I CNCed it out of a scrap chunk of pine. I did a bad job estimating the size, so it turned out way too small and not really usable.

 

Instead of just recreating it, but larger, I figured an additional feature to hold cards would be useful. Additionally, the CNCed wood required some hand sanding that I didn’t really want to deal with again, not to mention the CNC setup time.

None of that was particularly time consuming or difficult, but honestly, it was faster and less tedious to 3D print it and cast it in cement than to break out the CNC again.

With the larger size, I wasn’t so sure about just slotting it into a piece of plywood without supports, so I added a steel cable. It’s a nice design accent regardless.

Note the knotted-up steel cable in the mould. I also used XTC-3D, a resin coating designed to smooth out 3D prints. It was expired and a little chunky, but still seemed to work well. I’d use it again. The resulting cement form was smooth and required no post-processing.

I also designed a tensioning mechanism for the cable. There’s a captive screw that drags a nut up and down. The nut assembly holds the cable end. The whole assembly slides into a 3/8″ drill hole. At the top of the assembly, there is a hole that is sized just right for a screwdriver to come in and engage the screw, dragging the nut assembly closer, tightening the cable tension.

 

For the backing, I made a drill jig to be able to drill straight down the 3/4″ plywood without breaking through either side. The steel cable goes in, and then I attached it to the tensioning mechanism with a combination of melting the plastic to it, and some CA glue for good measure. The whole assembly then also got glued into the drill hole.

After all of that was set up, I used plumber’s putty epoxy to fill in all of the gaps between the tray and the plywood, and bolted it to the wall!

 

This project went just about as smoothly as possible, probably about 6 hours all told.

The Worst RC Car in the World

Quite some time ago now, I wanted to make some sort of remote control car. And, also, eventually turn it into an autonomous sumobot or similar.

This project was done a few years ago, but I’m writing it up now.

In the interest of getting something into my hands that I can iterate on, I did it as quickly as possible.

Yes, that is a structural q-tip. The cardboard was collapsing.

This crappiness is actually a feature – because I introduced this as a workshop to a bunch of people. Mostly beginners, with minimal exposure to electronics or microcontrollers.

Operation is simple. You program it in with your WiFi credentials, then when you turn it on, it creates a webpage that is just a joystick. That controls the car. It’s surprisingly satisfying.

The repo is here. It includes materials and code.

Everything in this sort of direction in the future will be a little bit more complicated or interesting, I just wanted to break the seal.

The fun in this project is quickly and cheaply building something simple, that can be controlled easily. No part of this is too hard to understand, and therefore, improve.

So with that in mind, I ran a few workshops to get other people started.

As a fun detour, I took the opportunity to play around with a vacuformer.

I modified a VW beetle model to remove the front fenders to better fit the single castor front wheel, then 3D printed the buck.

This is in parallel with designing an internal frame to hold all of the components together. For personal stuff, I’ve lately been using OnShape for my CAD. The free version forces designs to be open source / publicly searchable, which is pretty great when banging together AliExpress modules – and for finding the base VW model to modify into a tricycle.

Rock Planters Rock

I’ve been wanting to play around with concrete for a while, as a project material. I’m not sure what the end goal will be, but once I’m familiar with the material, I’ll be able to shoehorn it into other projects in unexpected ways.

Concrete (actually cement, as I don’t intend to add aggregate) is different from other casting materials like silicone or resin in the cost/quantity proposition. Silicones are quite expensive, often purchased in 1L volumes or so, while cement has the opposite problem. It’s cheap, but I’m stuck buying 40 pound bags of the stuff.

Initially, I found some geometric molds off the internet and started with that, along with a convenience store drink cup.

It was easy and worked quite well.

 

I used RapidSet cement. The box is blue. There are many different kinds, and apparently this is one of the lightest in colour, while being a little smoother than most.

To make these into decent planters, there are also two additional steps: soak it to get all the lye out, and then seal it so it doesn’t shed cement dust all over.

 

The sealant foams up all strangely. It’s a granite countertop sealer.

 

This was all just practice, so that I would have a feel for how it works before I put a ton of time into something cooler.

I wanted to cast a mountain range (or two!). I really like mountains.

I started with Mount Currie – A distinctive skyline feature near a town called Pemberton. Also as a learning exercise with Blender; I’d never used it before.

Here’s the view from a place I stay at sometimes:

 

I grabbed the geodata with TouchTerrain. Here are my settings:

 

Then in Blender, I set up the camera in the same-ish position and focal length of my camera.

 

Yeah, I’m very satisfied with that.

 

Next step, make the renderer export a depth map instead.

 

Then map it around a cylinder as another displacement map. The nice thing about this is that the Blender portion of the work, which I’m not very comfortable with, is done. So I’ve got it set up just right, and I don’t have to touch it – The rest of the fiddling on this model is with 2D depth maps in Krita, an open source Photoshop facsimile.

I faked the sides of the mountain range a bit, because they don’t just drop off to nothing.

 

When I got it how I liked, I 3D printed it to get a feel for how it looks in the real world. A few back-and-forths with that, and I built a mould around the positive model in Blender.

 

Each piece took about 24 hours to print. This is surplus PLA that will never otherwise get used, so the volume of plastic is totally okay, and the size (and therefore weight) of the cement involved would be a concern for a mould that was more plastic-efficient and less beefy.

And then I poured.

The thermal camera shows how hot it gets as the cement starts to kick. It got past 60°C by the time I left it for the night.

I actually did it twice. The first one I didn’t mix up enough, so ended up scrambling to mix more, which didn’t fill in the mould completely. Honestly, I kinda like it.

 

For the second attempt, the mould had warped enough that the seam lines were very visible. After the second pour, the mould had warped enough that it wasn’t viable to pour a third time. This is a really interesting datapoint. It’s possible that PETG or some other higher temperature material would fare better. And this issue doesn’t crop up in the smaller castings I did. The cement only gets hot enough to deform the plastic when there’s a large volume curing at once.

 

 

But regardless. The result!

It’s decent. I’m about 80% happy with it. It’s recognisable as the target mountain range, but it’s not instantly identifiable. This was done at the actual aspect ratio of the mountains, with the focal length of the (cellphone) camera I photographed them with, and it looks a little too shallow. Perhaps the aspect ratio would be a good knob to turn in future experiments, to get the mountains in the casting to look a little taller.

Force-Directed Circuit Board Footprint Autoplacement

From Wikipedia:

Force-directed graph drawing algorithms are a class of algorithms for drawing graphs in an aesthetically-pleasing way. Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions, and then using these forces either to simulate the motion of the edges and nodes or to minimize their energy.

Force-directed graphs are a common way to display things like mind-maps. It’s a good way of spreading out the whole collection, while grouping related items together, and minimizing the path length between the closely related items. In my mind, this has a lot of similarities with how PCBs are laid out.

Well, there’s only one way to prove that theory.

KiCad Footprints animated in an exploding fashion

Using KiCad PCB parsing code I wrote for another project, I was quickly able to grab the nets and footprints out of a KiCad project. Displaying the nets and allowing specific ones to be turned off was a feature I identified as critical early on, because the ground or power nets would overwhelm any of the others, rendering the important nets useless.

Truthfully, a significant part of this was wrangling TKInter to build the Python GUI that I wanted. It was a success, but I’ve never used it before, and I am not a fantastic UI designer.

Under the hood, the system essentially treats each of the nets as a spring, and applies a simplified version of Hooke’s Law to each connection. The centre point of each footprint acts as a charged particle, and a simplified version of Coulomb’s Law repulses it from all of the other footprints. These algorithms are a pretty typical way to do this, by essentially setting up a physics simulation. One unusual tweak on this strategy is that the nets don’t act on the footprint origin. They act on the pads themselves, which allows a torque to impart a rotation on the footprint.
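As a sketch of what one simulation step looks like (simplified from what the repo actually does; all names here are illustrative, not the real code):

```python
import math

def force_step(pos, ang, pads, nets, k_spring=0.05, k_repel=2000.0):
    """One iteration of the placement physics. pos[name]=(x, y) footprint centres,
    ang[name]=rotation in radians, pads[(name, pad)]=(dx, dy) offsets from the
    centre, nets=list of ((nameA, padA), (nameB, padB)) two-pad connections."""
    force = {n: [0.0, 0.0] for n in pos}
    torque = {n: 0.0 for n in pos}

    # Coulomb-style repulsion between footprint centres
    names = list(pos)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-6
            f = k_repel / (d * d)
            force[a][0] += f * dx / d; force[a][1] += f * dy / d
            force[b][0] -= f * dx / d; force[b][1] -= f * dy / d

    def pad_world(name, pad):
        # pad offset rotated by the footprint angle, translated to its centre
        ox, oy = pads[(name, pad)]
        c, s = math.cos(ang[name]), math.sin(ang[name])
        return pos[name][0] + ox * c - oy * s, pos[name][1] + ox * s + oy * c

    # Hooke-style springs pull on the pads, not the centres, so an
    # off-centre pad also produces a torque (r cross F) on the footprint
    for (a, pa), (b, pb) in nets:
        ax, ay = pad_world(a, pa)
        bx, by = pad_world(b, pb)
        fx, fy = k_spring * (bx - ax), k_spring * (by - ay)
        force[a][0] += fx; force[a][1] += fy
        force[b][0] -= fx; force[b][1] -= fy
        torque[a] += (ax - pos[a][0]) * fy - (ay - pos[a][1]) * fx
        torque[b] += (bx - pos[b][0]) * -fy - (by - pos[b][1]) * -fx

    return force, torque
```

Integrate the forces and torques into velocities each frame, add some damping, and the layout settles on its own.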

I’ve gotten this project as far as “fun tech demo”, and it’ll likely fall dormant for a while in this state. At some point, I will build an appropriate PCB using this technology, because I love making unusual designs that are electrically fine.

The repo is here. Have at it.

Strange New Bedroom Furniture

I bought a used lathe.

 

It’s a Craftex B1979C, which seems to be nearly identical to a Craftex CX704. Similar to many such mini lathes, actually, with varying swing lengths.

The previous owner explained, a little bit sheepishly, that he tried to cut steel a little too hard, and burned out the AC motor. He replaced it with a hobby BLDC motor and ESC, designed for things like quadcopters.

I played around with that system a little bit, and I didn’t like it.

Annoyingly, he bought a $50 motor and a $50 motor controller, both sensorless versions. For these kinds of motors, you need sensors to start up at low speeds with torque on the motor. The sensored versions would have cost $60 for the motor and $60 for the motor controller – Not much extra for him, but annoying for me to have to shell out the full amount for both, and then have the old ones as somewhat useless spares.

Anyway, instead of going the same route, I chose to get a more sophisticated controller so that I could precisely control parameters like speed and acceleration. The options are essentially VESC or ODrive, unless I want to custom build something myself (I do not).

The ODrive, at least on paper, looked a little cooler. Some good communication strategies, lots of control modes, and generic enough to work well for this somewhat strange application. Right after I’d picked mine up, they discontinued support of the V3.6 and went to a closed source model, which rubs me the wrong way. The new version is also more expensive and has a single channel instead of two channels. One of the big issues with the legacy release is that there is a bug in their UART driver – If you send enough malformed packets, like, say, because you have a data line right next to a spinning motor, then the controller eventually stops responding to all UART data. That can be an issue when the packet you want to send is “stop now, immediately”. That definitely hastened my building of a separate e-stop box.

Fortunately, after I wrote a CANBus driver and used that, it’s been reasonably solid. That was just more of a time investment than I wanted to make.

I built a front panel for it as well. It’s a couple simple PCBs holding the switches, OLED, buttons, and a dev board. The advantage of not including everything on a complicated PCB is that I can pop out the dev board to flash it on my bench, before returning it to the lathe. The dev board is an ESP32, which is driving a little OLED screen that shows the target and measured speed, current, forward and backwards directions of the motor controller. The list of features I could add is nigh-infinite, but this is good enough for now. Oh, and I can do software updates over the internet! There is no way this can end badly.

 

It’s weird how sometimes three sentences can encompass four months’ worth of occasional project time.

 

The change gears the lathe came with are steel, which is unusual for a model like this. It came with a set of gears, from A-D: 20/80/20/80. However, this combination isn’t listed on the table anywhere.

The formula is ((A/B)×(C/D))×lead_screw_pitch/initial_ratio, where the lead screw is 3mm and the initial ratio is 2:1. So, for example, a 0.4mm pitch is ((20/50)×(40/60))×3/2 = 0.4.

That means the gears I have are about 0.09mm pitch, which is good as a powered feed, but not intended for any kind of threading.
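As a sanity check, the formula drops straight into a few lines of Python (thread_pitch is just my scratch name for it):

```python
def thread_pitch(a, b, c, d, lead_screw=3.0, initial_ratio=2.0):
    """Pitch in mm for change gears A/B/C/D on a 3mm lead screw with a 2:1 initial ratio."""
    return (a / b) * (c / d) * lead_screw / initial_ratio

print(thread_pitch(20, 50, 40, 60))   # the table's 0.4mm example
print(thread_pitch(20, 80, 20, 80))   # the as-shipped gears: 0.09375mm
```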

Most of the other examples of this family of lathes use plastic change gears. Those are presumably cast or machined nylon, but with a little bit of babying, I bet 3D printed gears would be Good Enough.

As near as I can figure, the change gears are module 1, with a 20 degree pressure angle and a thickness of 8mm.

 

To practice threading, I copied a bolt that I had on hand, which was 1/2″-13 TPI, using the table to get 40/65/60/30.

 

 

Looks neat! But my tool geometry is bad and the angles are all wrong, so I reset, bought some more tooling, and moved on to something actually useful.

 

3D printed change gears work great! I’ve printed off a whole set, and I’m stoked. Arbitrary threads are unlocked.

 

 

I wanted to turn a gear shift knob, which has an M12×1.25mm thread.

Using the formula above, the gears I need are 50/40/40/60.

The outer diameter of the stock is about 11.85mm.

 

 

This process ended up being incredibly drama free. Twenty minutes after starting, I had a very-slightly too-loose thread, but it snugged right up when I tightened on the knob.

 

 

Success! Now to do some cool things.

Brass Lamp

I haven’t properly built anything with my hands in a while, so it’s time.

 

I mocked up a few lamp ideas, and this one seemed fun. Really, really rough mock-ups.

 

That was neat looking, so I fleshed out the hinge section, which I expected would be the hardest part to figure out and build.

 

 

 

 

First, I 3D printed the hinge, just to get a feel for if my sizes felt right – I am bad at judging size in CAD, things often turn out way larger or way smaller than I expect, and therefore really difficult to build.

 

These looked fine, though.

 

I picked up a ton of brass from a hobby shop. Can you guess what the main focus of the hobby shop is?

 

 

For the hinge pieces, I didn’t have the right shapes or sizes of brass stock available, so I needed to cut a bunch of smaller pieces, braze them together with silver solder, and then take them to the belt grinder.

 

Pretty close!

 

And then tried to add one more piece, dropped it while the solder was molten, and had them splatter apart on the ground.

 

 

Okay one more time.

 

 

It’s always better the second time, anyway. I used an M1.5 brass screw as the hinge pin.

 

For the stand, I 3D printed some saw guides so that I could get the tube angles perfect. The stand wouldn’t end up with the right geometry unless the cut angles were just right.

 

Then, after brazing:

 

 

It matches up with my CAD perfectly, I’m stoked.

 

Some blu-tak was used to hold the hinge pieces and light backing together to get a sense of how it was all going to go together.

 

 

The light panel itself was a circle of aluminum that I spraypainted black. Then I riveted the hinge on and laid out my LED tape.

 

 

Wiring.

This could have been done much neater. Next time.

Encircling the LED panel, I bent some flat bar aluminum, riveted it, glued it, and painted it. Then glued on a plastic diffusion circle. This was all done quickly, so I didn’t take many pictures.

 

For the base, some nice dark wood would be ideal, but that would have required a fair amount of material acquisition and tool-hunting that I wasn’t up for on this project. So I took my base and split it up into easily printable chunks.

I was going to attach them together, sand, patch, sand, and paint them to turn them back into one object, but I kinda liked the jigsaw effect, so I left it. PETG sands really well, and I treated it with a light oil coating to keep it from taking on fingerprints.

The pieces are just bolted together, and mostly hollow to allow mounting of the electronics.

For the controller, I have a surplus prototype lighting controller of dubious origin, and wrote some firmware to handle fading and mixing the two LED channels. The 24V power supply comes from AliExpress.

Similarly, for the knobs, I went for an easy and quick solution using 3D printing for the ends to retain a little bit of the brass tube onto the potentiometers. The left knob controls the light intensity, and the right knob controls the light’s colour temperature. This lamp can fade in between a really cool white and a really warm light.

Running the wires through the brass tube was a fun adventure. What I ended up doing was filling the tube with oil, running a single wire up through the bottom, then soldering it onto the other three wires. After that, I could use the first wire to pull the whole mess back through the tube. Then I flushed the tube with alcohol to clean it all out.

And, final wiring, and it works! It looks great. I can see all the small defects, but that’s okay. I’ll do better on the next one.

Dumping Firmware With a 555

Voltage glitching, also called fault injection, is the process of dumping the energy on a microcontroller’s power rail very briefly – Just enough to cause a glitch, where it skips an instruction, rather than causing a brown-out-induced reset.

I first learned about the technique in connection with the Chip Whisperer, which is an FPGA-based board with all the bells and whistles. It’s quite expensive, and like any other situation where you add an FPGA, very complicated. Naturally, that has elevated it to a level above what mere mortals can perform in the limited free time we have.

After knowing about this, and having it in the back of my mind for several years, I finally have a good use-case that can justify spending some time on this. I have a pile of generic STM8-based devices, and while I have written some of my own alternative firmware, I’d love to be able to dump the original flash to revert them back to factory code.

The function of the device doesn’t matter, that’s not the point of this exercise.

Some quick searching for options leads to this recent write-up, which is sparse on details, but serves as excellent inspiration that this task is very doable.

A few architecture notes:

The STM8 has a Read Out Protection bit in the configuration area of its flash memory. When the programmer attempts to read out the flash memory, the bootloader first checks this bit, and if it’s cleared, it starts reading out the flash to the programmer. If it’s set, it just reads out zeroes. Write capability is never blocked – That is, you can still write a zero to that ROP, and then the microcontroller will be “unlocked”, but it does clear the program memory, too.

One of the pins on STM8s is called VCAP, and it’s attached to the internal voltage regulator. The CPU runs on this voltage rail, not the actual voltage that is provided to the IC’s power pins. The pin is intended to be connected to a decoupling capacitor, and that provides a perfect spot to inject my glitches. Most microcontrollers also have something called a Brown-Out Reset: when the input voltage rail sags too low, the peripheral triggers and resets the microcontroller. Obviously, this is something we want to avoid, and using the VCAP pin should help with that.

In terms of glitching, there are two important considerations:

The glitch must land during the cycle in which the CPU reads the ROP bit, and it must not last long enough to trigger the BOR or make any other important instruction fail. It’s not easy to know these exact timings, so every reasonable value must be tried – essentially brute-forcing the process.

Now, the logical way to do this would be to use an external microcontroller to wait for the programmer to reset the system, wait a set period of time, and then trigger the output transistor to glitch the voltage rail. That’s boring! You know what else can do that? That’s right, a pair of 555s.

Here are two 555s set up as monostable pulse generators. The input is the RST line from the STM8 programmer. The first 555 sets the delay; the second 555 sets the length of the output pulse. Each timing is controlled by its own potentiometer. These then drive a MOSFET that dumps the energy stored in the internal voltage regulator’s cap.

After building it with larger-than-designed time values and testing it to prove that the waveform looks as expected, we solve the next hurdle:

To figure out decent ranges for the potentiometers: my STM8 board runs at 8MHz, which means each clock cycle takes 125ns. The STM8 requires at least 2 clock cycles per instruction (one for retrieval and one for execution), and more for instructions with multiple bytes or arguments. So, ballparking, we need a pulse anywhere from 0.2us to 1.2us or so.
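As a rough sanity check on component values, the pulse width of a 555 in monostable mode is t ≈ 1.1·R·C. A quick sketch (the capacitor and resistor values here are illustrative, not the exact parts on the board):

```python
def monostable_pulse(r_ohms, c_farads):
    """Pulse width of a 555 in monostable mode: t = 1.1 * R * C."""
    return 1.1 * r_ohms * c_farads

def r_for_pulse(t_seconds, c_farads):
    """Resistance needed for a target pulse width with a given timing cap."""
    return t_seconds / (1.1 * c_farads)

cycle = 1 / 8e6                    # 125 ns per clock at 8 MHz
window = (2 * cycle, 10 * cycle)   # roughly 0.25 us to 1.25 us of instructions

# With a (hypothetical) 100 pF timing cap, a ~600 ns pulse needs roughly:
print(r_for_pulse(600e-9, 100e-12))  # ~5.5 kOhm
```

The potentiometers just sweep R around values like these, which is what makes the brute-force search practical.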

One problem with a typical 555 is that it can’t generate pulses shorter than about 10us. Fortunately, I have a pair of high-speed versions up my sleeve: the LMC555, with a 10ns minimum pulse instead, which is very zippy. They’re SMD-only, so they got popped onto breakout boards to fit the breadboard and swapped in. Some other components got tweaked too, as I played around more.

 

Now on to the programmer. I’m using a standard STLink V2, which speaks the SWIM protocol that allows programming and reading of the STM8’s flash.

With a little bit of Python and stm8flash, we get this:

 

import subprocess

# Dump the first byte of flash (-b 1) repeatedly until it reads non-zero,
# i.e. until a glitch has slipped past the read-out protection.
# .\stm8flash.exe -c stlinkv2 -p stm8s105?4 -r out.bin -b 1
out = b'\x00'
while out == b'\x00':
    subprocess.run(['stm8flash.exe', '-c', 'stlinkv2', '-p', 'stm8s105?4', '-r', 'out.bin', '-b', '1'])
    with open("out.bin", "rb") as f:
        out = f.read(1)
    print(out)

# Protection bypassed: dump the whole flash.
subprocess.run(['stm8flash.exe', '-c', 'stlinkv2', '-p', 'stm8s105?4', '-r', 'out.bin'])

 

In PC applications, writing text to console is a surprisingly slow process, so a low-effort tweak I made to make the loop run faster is to remove the flash utility’s console logging, just by removing some lines here.

So, all set up, potentiometer on the left controls the delay, pot on the right controls pulse length.

And, bam.

Firmware dumping with a 555.

 

Well, not that fast. It took about 45 minutes of fiddling with the knobs until all its secrets were unlocked. I’d sweep the right knob the whole way, then tweak the left knob very slightly, then sweep the right knob again. It worked because the knobs only had to be within the right range for a brief moment – it only had to work once.

Would this have been easier with a microcontroller? Oh yes, of course. But that’s not nearly as interesting.

Chromaticity

Here is a brief overview of how light and colour work, in the context of LED lighting.

We’ll mostly be discussing the CIE 1931 colour space, with reference to the chromaticity diagram, shown below.

This is the 1931 version. Newer versions that look slightly different have come out, but the general intent is the same, and they are all used for different calculations. The 1931 version is “good enough” and is universally the one that is referred to when a colour is described in xy coordinates.

The points along the edge are pure visible wavelengths (displayed in nanometres). Anything inside the diagram is some mixture of the wavelengths.

Computer monitors can’t reproduce the full range of colours your eyes can see, so the colour space diagram above is just a representation – not the actual colours you would see if it were displayed side-by-side with light of the actual wavelength.

There’s a curve along the centre, called the black body curve, that represents “white” light. White can be many different things – typical incandescent bulbs are around 2500K, towards the right (warm) side of the black body curve. Daylight is very cold, with lots of blue components, towards the left. As long as it’s along that curve, it can be considered “white”. The vertical-ish lines indicate where that particular temperature extends, if a given data point isn’t exactly on the curve. Duv, or “delta u,v”, is a measurement describing a point’s distance from the curve.
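As an aside, the correlated colour temperature of a point near the curve can be estimated directly from its xy coordinates. A sketch using McCamy’s 1992 polynomial approximation (accurate to within a few kelvin close to the black body curve):

```python
def mccamy_cct(x, y):
    """Approximate correlated colour temperature (K) from CIE 1931 xy,
    using McCamy's 1992 polynomial fit. Only valid near the black body curve."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 daylight (0.3127, 0.3290) comes out near 6500 K,
# and incandescent Illuminant A (0.4476, 0.4074) near 2856 K.
print(mccamy_cct(0.3127, 0.3290))
print(mccamy_cct(0.4476, 0.4074))
```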

In terms of white LEDs, there will typically be a “warm white”, also called “tungsten”, and a “cool white”, called “daylight”. As the black body curve shows, if you have both LEDs, you can’t just interpolate between the two and expect the light output to follow the curve – it will travel along a straight line between them. There are a couple of solutions to this, and an easy cheat is to copy the strategy one of the industry leaders is using.
The Philips Hue tunable white smartbulbs are excellent. I’ve taken apart a handful of smartbulbs, and the Hue has the largest quantity of LEDs I’ve seen (read: brightest), as well as a few tricks that get better light quality than any other system I’ve come across.

The daylight LEDs, labeled “CW”, and tungsten LEDs, in this case labeled “FW”, (we’ll get back to that) are fairly standard. In the centre is where it gets interesting, with the LEDs labeled “L”. This is lime green.

Here is a closeup of those LEDs plotted on an XY graph, with the black body curve, and an interpolation line between them:

With the tungsten and daylight LEDs both located on the black body curve, fading between the two drags the light significantly away from what we perceive as “white”, essentially making it look magenta. The lime LEDs, towards the top of the graph, are perfectly situated to pull the light output back onto the curve when the bulb is at an intermediate temperature.

Some further investigation into the “FW” designator on the PCB reveals that some LED manufacturers call that particular temperature of LED “flame white”. It’s substantially warmer than most bicolour white LED tapes, at around 2200K. Mint LEDs are also sometimes used for the same purpose as lime LEDs. Mint has similar spectral components to lime, but with a little more blue, moving its xy coordinates a little towards the bottom-left.
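The effect can be sketched numerically. Additive light mixes linearly in XYZ space, so the chromaticity of a warm/cool blend always sits on the straight chord between the two endpoints; adding a lime source pulls the mix’s y coordinate back up towards the curve. A sketch using standard illuminants as stand-ins for the white LEDs and made-up coordinates for the lime LED:

```python
def xy_to_XYZ(x, y, Y=1.0):
    """CIE xyY to XYZ tristimulus values at luminance Y."""
    return (x * Y / y, Y, (1 - x - y) * Y / y)

def mix(*sources):
    """Additive mix of (x, y, luminance) sources; returns the mix's xy."""
    X = sum(xy_to_XYZ(x, y, L)[0] for x, y, L in sources)
    Y = sum(L for _, _, L in sources)
    Z = sum(xy_to_XYZ(x, y, L)[2] for x, y, L in sources)
    s = X + Y + Z
    return (X / s, Y / s)

warm = (0.4476, 0.4074)   # ~2856 K (Illuminant A, as a stand-in)
cool = (0.3127, 0.3290)   # ~6500 K (D65, as a stand-in)
lime = (0.38, 0.55)       # hypothetical lime LED, well above the locus

# A 50/50 warm+cool blend lands on the straight chord, below the curve:
half = mix((*warm, 1.0), (*cool, 1.0))

# Adding some lime pulls the mix upward, back towards the curve:
corrected = mix((*warm, 1.0), (*cool, 1.0), (*lime, 0.3))
print(half, corrected)
```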
Here they are next to a high quality warm/cool (bicolour) LED tape:

From left to right: my daylight, Philips daylight, my tungsten, and Philips flame. Daylights are fairly similar, within binning and measurement tolerances. My tungsten LED is 2600K, showing how far the Philips flame LED is into the red spectrum. Another strategy for getting temperature that warm is to add an amber LED.

Tungsten, flame, and amber, left to right respectively:

The tungsten LED can be combined with amber to reach a warmer (lower) temperature. Amber has an additional purpose: standard RGB gamuts don’t have a whole lot of range between the red and green primaries, and the added amber channel helps fill that out. More on this in a bit.

“Gamut” is the displayable area made possible by mixing the available colours. Each primary colour is plotted as a point, and the gamut is anything within the area those points enclose.
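That membership test is just geometry: a point is in gamut if it falls inside the triangle of primaries. A sketch using the same-side cross-product test, with sRGB-ish primary coordinates for illustration:

```python
def in_gamut(p, primaries):
    """True if xy point p is inside the triangle formed by three primary
    xy points, using the same-side (cross product) test on each edge."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    r, g, b = primaries
    signs = [cross(r, g, p), cross(g, b, p), cross(b, r, p)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# sRGB-ish primaries, for illustration:
srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
print(in_gamut((0.3127, 0.3290), srgb))  # white point: True
print(in_gamut((0.08, 0.30), srgb))      # saturated cyan: False
```

Adding an amber or cyan emitter just adds a vertex, turning the triangle into a larger polygon.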

Here’s a typical RGB LED tape.

And their spectral distributions:

This green has a slightly wider spectrum than the other primaries, so it doesn’t fit as nicely on the edge of the xy graph. It’s made up of mostly 520 nm and 540 nm components.

The MacAdam ellipse is a graphical representation of what observers see as “one colour”. That is, anything within one of the ellipses shown appears to be the same colour to a regular person. This particular image only shows the results from one participant in the study, so it shouldn’t be taken as definitive, but it does show trends that provide actionable insights. The shapes are also exaggerated by about ten times for clarity.

The ellipses tend to stretch along the direction of the blue-red line, for example, showing only a few perceptibly different shades between those two primaries. The red-to-green direction, however – particularly the red-to-yellow region – is extremely granular. This is where amber comes in.

 

The amber gives additional granularity in the existing gamut region where our eyes are very sensitive, and in the space where the gap between the RGB peaks is largest.

Emerald green, at 560nm, is another LED that would help fill out the spectrum gaps, providing more range towards the green part of the red-green line. Emerald and amber don’t alter the overall gamut significantly, however. There are areas without coverage on all sides of our gamut triangle, but the area to the left of the blue-green line is the most significant. This could be supplemented by InGaN cyan LEDs, at 505nm. That will be the topic of future experiments.

For reference, here is a graph of all of the colours measured and discussed.

 

Watching Plants Grow

 

I’ve been into timelapse photography for a long time, often focused on driving the cost down by hacking existing digital cameras.

That follow-up post (written almost two years after the original work) promised project pictures that were never delivered, so I went spelunking in some nearly decade-old backups for them.

 

 

Recently, I came across the ESP32-Cam, which basically hits all of the cost/hardware requirements I was aiming for, nearly a decade ago.

It uses an Espressif ESP32, which I’m very familiar/comfortable with, and is about $6. The camera sensor is 2MP, which is small for images, but fine for video.

There are a lot of timelapse projects out there using this board, but none of them really have the features I’m looking for. Espressif even has an official “webcam” codebase that others have packaged up into cool projects. But here are my requirements:

  • A web interface to control capture / timing parameters
  • Serving up images over the web interface to check that those parameters are good
  • Saving images to SD card
  • Nonvolatile storage of the parameters, along with the current snapshot number, in order to gracefully recover from power failure
  • SD card images available to download over the web interface
  • Over-the-air updating
  • Reasonably well-laid out codebase that’s easy to hack on

Essentially, I want to be able to plug this into a hard-to-reach closet and not have to ever physically touch it again. Lots of projects have a subset of these features, but none that I’ve seen hits them all.

One of the other ideas I had to improve on the typical timelapse design was to have the ESP32 periodically grab the real-world time from NTP servers. It’s set to run every half an hour, which is probably more often than is useful given the latency of hitting an NTP server. Regardless, it should keep the average timelapse interval accurate over a long period of time.

So I hacked something together. Originally I used PlatformIO with the Arduino framework, because it’s quicker to get a project off the ground than the official Espressif ESP-IDF. The branch is still there, but I abandoned it after running into issues with partitioning, OTA updating, and the additional PSRAM hardware all working together. The main branch uses the ESP-IDF now.

It’s essentially a webserver that controls the camera. You can open up a page and modify settings, including timelapse intervals:

 

You can also get existing parameters:

 

And snap takes a one-off photo to check your setup without touching anything. That made this super easy to tweak with one hand, while holding a laptop with the other.

When a sequence is done, it’s easiest to grab the SD card and plug it into a computer to copy the files over. For longer sequences, or to check progress, it’s entirely possible to grab the images over the web interface. Just slow.

To stitch together all the images, initially I used this command:

ffmpeg -r 30 -start_number 1 -i %06d.jpg -vcodec mpeg4 -vf "transpose=1" -y movie.mp4

That results in a pixelated and honestly pretty crummy output. It’s fast, though.

This got much better results:

ffmpeg -start_number 1 -r:v "480/1" -i %06d.jpg -pix_fmt yuv420p -c:v libx264 -preset slow -vf "transpose=1" -y -f mp4 movie.mp4

ffmpeg is a little inscrutable, so the parameters that require tweaking warrant an explanation. Order matters, too – Some of the parameters affect the input, some affect the output, depending on position.

-r:v sets the input frame rate, overriding the default of 25 FPS. So a 250-frame sequence with -r:v 25/1 would make a 10 second video, while -r:v 50/1 would result in a 5 second video.
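That arithmetic is worth writing down once. A trivial sketch with hypothetical helper names; the 40 second interval and 480 FPS rate match the values used elsewhere in this post, the rest are examples:

```python
def clip_seconds(n_frames, input_rate):
    """Output duration when stills are read in at input_rate frames/sec."""
    return n_frames / input_rate

def real_time_per_clip_second(capture_interval_s, input_rate):
    """How much real-world time one second of output represents."""
    return capture_interval_s * input_rate

print(clip_seconds(250, 25))              # 10.0 seconds of video
print(real_time_per_clip_second(40, 480)) # 19200 s (~5.3 h) per clip second
```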

The -vf transpose option rotates the video. The ESP32-Cam’s camera is mounted sideways, but that’s a minor irritation.

To mount the device, tripods are the obvious choice. I created a quick, minimalist enclosure that allows access to the GPIO pins, SD card, reset button, and fits a 1/4″-20 nut that is compatible with most tripods. A Gorillapod, in my case.

 

For the video posted at the top of the page, here is the setup:

 

It was done with a picture taken every 40 seconds, 8 second exposure, low gain and brightness.

ffmpeg -start_number 1 -r:v "120/1" -i %06d.jpg -c:v libx264 -preset slow -vf "transpose=1" -y -t 00:00:08 -f mp4 movie.mp4

 

The -t option sets the end time – there was a lot of content at the end where the plant wasn’t moving much.

Future goals might include shoving the whole mess into a closet, along with timed control of the lights and a watering pump, and leaving it for a month.