When they need to add temperature control to a project, many hackers reach for a K-type thermocouple for their high-temperature needs, or an integrated temperature-sensing IC when it doesn’t get that hot. The thermocouple puts out tiny voltages that demand extremely high gain, so you pretty much need a dedicated interface IC to read it, which can be expensive. The temperature-sensing ICs aren’t as expensive, but they’re basically limited to temperatures around boiling water. What do you do if you want to control a reflow oven?
There’s a cheaper way that spans a range between Antarctic winter and molten solder, and you’ve probably already got the parts on your shelf. Even if you don’t, it’s only going to run you an extra two cents, assuming that you’ve already got a microcontroller with an ADC in your project. The BOM: a plain-vanilla diode and a resistor.
I’ve been using diodes as temperature sensors in three projects over the last year: one is a coffee roaster that brings the beans up to 220 °C in hot air, another is a reflow hotplate that tops out around 210 °C, and the third is a toner-transfer iron that holds a very stable 130 °C. In all of these cases, I don’t really care about the actual numerical value of the temperature — all that matters is reproducibility — so I never bothered to calibrate anything. I thought I’d do it right for Hackaday, and try to push the humble diode to its limits for science.
What resulted was a PCB fire, test circuits desoldering themselves above 190 °C, temperature probes coming loose, and finally a broken ramekin and 200 °C peanut oil all over my desk. Fun times! On the other hand, I managed to get out enough data to calibrate some diodes, and the results are fantastic. The circuits under test included both best practices and the easiest thing that could possibly work, and the results are pretty close. This is definitely a technique that you want to have under your belt for most temperature ranges. The devil is in the details, of course, so read on!
We all know what the forward voltage drop of a run-of-the-mill silicon diode is, right? 0.6 V or 0.7 V or so, and that’s good enough for a lot of napkin-based calculations. But this voltage drop depends on two main factors: the current that you’re driving through the diode, and the temperature. If you hold the current fixed and read the forward voltage, you’ve got yourself a temperature sensor. While it can vary a little across diodes, figure on a response of -2 mV/°C.
Holding the current “fixed” can be as simple as using a resistor: because the diode’s forward voltage doesn’t change all that much, the current through the resistor is almost constant. All you have to do is settle on a suitable resistor value. The next step up is to build a constant-current supply out of two transistors. I test out both of these methods here.
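To put rough numbers on “almost constant”: the bias current does shift as the diode heats up and its forward voltage falls, but the diode responds logarithmically to current, so the shift costs only a few millivolts. A quick back-of-the-envelope check in Python — the 3.3 V supply and the forward-voltage endpoints are my assumptions for illustration:

```python
# How "constant" is the current through a plain resistor bias?
# Assumes a hypothetical 3.3 V supply; the diode's forward voltage
# swings from roughly 0.6 V at room temperature down to about
# 0.25 V near 200 deg C (about -2 mV per degree).
import math

VCC = 3.3      # supply voltage (assumed)
R = 10_000     # bias resistor, ohms
VT = 0.026     # thermal voltage near room temperature, volts

i_cold = (VCC - 0.60) / R   # current at ~25 deg C
i_hot = (VCC - 0.25) / R    # current at ~200 deg C

# The diode's logarithmic response squashes that current shift
# into a tiny extra forward-voltage drop:
dv = VT * math.log(i_hot / i_cold)

print(f"{i_cold * 1e6:.0f} uA -> {i_hot * 1e6:.0f} uA")
print(f"extra Vf shift: {dv * 1000:.1f} mV (~{dv / 0.002:.1f} deg C of error)")
```

So a 13% swing in current works out to roughly 3 mV, or a degree and a half of error spread across the whole range — which is why the plain resistor gets you most of the way there.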
This isn’t news, though. The next step up in complexity, used by most of the IC temperature-sensing chips, is the “silicon bandgap temperature sensor”. Instead of a diode, two transistors are used, and common-mode imperfections are cancelled out with an op-amp. This works great in an IC, where the two transistors can be nearly identical and at the same temperature, but for DIY purposes, it adds more complexity than it’s worth. Here’s a white paper if you want to dig into the details.
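For the curious, the core of the bandgap trick fits in a few lines: the difference between two base-emitter voltages run at a fixed current ratio depends only on absolute temperature and physical constants — the poorly-controlled saturation current cancels out. A quick sketch, with a 10:1 current ratio picked purely for illustration:

```python
# Why the bandgap trick works: Vbe = (kT/q) * ln(I/Is) depends on
# the poorly-controlled saturation current Is, but the *difference*
# of two Vbe's at a known current ratio does not:
#   dVbe = (kT/q) * ln(I1/I2)  -- proportional to absolute temperature.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19   # electron charge, C

def delta_vbe(t_celsius, current_ratio=10):
    """PTAT voltage for two matched transistors at a given current ratio."""
    t_kelvin = t_celsius + 273.15
    return (k * t_kelvin / q) * math.log(current_ratio)

for t in (0, 25, 100):
    print(f"{t:3d} deg C -> {delta_vbe(t) * 1000:.2f} mV")
```

The catch for DIY: the two transistors must be matched and isothermal, which is trivial on one die and a pain on a breadboard.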
Instead of aiming for accuracy that’s measured in fractions of a degree, however, I’m interested in ballparking how accurate the simplest homemade hacker solutions can be. What do you gain by adding in a real constant-current source? Is it worth it? Let’s find out.
I took seven diodes, six 1N4148s and a 1N4002, and heated them up along with a K-type thermocouple and a pretty good Fluke multimeter that reads it. The 1N4148 is a dead-standard small-signal diode, and it comes in a non-melty glass case: it’s perfect for our purposes. Three of the 1N4148s were supplied current through a 10 kΩ resistor, simply because that’s a good middle value and I wanted to assess the diode-to-diode variability. Two other resistor values were chosen, 3.3 kΩ and 100 kΩ, roughly spanning the reasonable range of currents.
I paired the 1N4002, a high-voltage, high-current rectifier diode, with a 10 kΩ resistor to see what effect a different diode type had. Finally, the last 1N4148 was driven by a constant current circuit, straight out of Art of Electronics, that supplied a pretty solid 50 μA.
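The design math for that style of two-transistor current source boils down to one division: the limiting transistor steals base drive whenever the drop across a sense resistor exceeds one Vbe, so the output current settles at roughly Vbe over R. The values below are my illustration, not necessarily what was on my bench:

```python
# Sizing a classic two-transistor current limiter: Q2 robs Q1's base
# drive once the drop across R_sense reaches one Vbe, so the output
# current settles at roughly Vbe / R_sense.
# Values here are illustrative, not the article's actual parts.

VBE = 0.6          # base-emitter drop, volts (approximate)
I_TARGET = 50e-6   # desired output current, amps

r_sense = VBE / I_TARGET
print(f"R_sense ~= {r_sense / 1000:.0f} kOhm for {I_TARGET * 1e6:.0f} uA")
```

Note that Vbe itself drifts by about -2 mV/°C, so even this “constant” current wanders a little if the transistors themselves get warm; keep them away from the heat.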
An STM32 microcontroller was programmed to take readings from each of the diodes whenever I typed in a temperature. If I had a logging multimeter, this could have been a lot less boring. As it was, I waited for the displayed reference temperature to hit an even five-degree value and typed it into the STM32, which read all seven ADC channels and printed the values over the serial port. So I’d heat all the diodes up, log the data on my laptop as they slowly cooled, clean it up once it was done, and graph it.
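The laptop-side cleanup is just text munging. A minimal sketch — the comma-separated line format here is my assumption, since all we know is that the STM32 printed a temperature plus seven ADC readings per entry:

```python
# Parse logged serial output into columns ready for graphing.  The
# "temp,adc0,...,adc6" line format is an assumption; the article only
# says the STM32 printed all seven ADC values per logged temperature.

def parse_log(text):
    """Return (temps, readings), where readings[i] holds the seven
    ADC counts taken at reference temperature temps[i]."""
    temps, readings = [], []
    for line in text.strip().splitlines():
        fields = line.split(",")
        if len(fields) != 8:
            continue                      # skip partial or garbled lines
        temps.append(float(fields[0]))
        readings.append([int(f) for f in fields[1:]])
    return temps, readings

# Hypothetical log excerpt from a cool-down run:
sample = """\
195,1210,1215,1208,1190,1402,1180,1233
190,1224,1229,1222,1204,1418,1194,1247
"""
temps, readings = parse_log(sample)
print(temps)         # [195.0, 190.0]
print(readings[0])   # the seven ADC counts at 195 deg C
```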
Sounds easy, right? Well, here’s where it gets fun. The real trick is making sure that all seven diodes and a thermocouple are at the same temperature, ranging from room temperature all the way up to burning-PCB.
Four Fails Equal a Success
When I was talking with Hackaday’s own Mike Szczys, he asked how you solder diodes when the temperature exceeds solder’s melting point. I naturally replied that I always crimp in those situations (which is true) and then went on to build a diode test rig with soldered connections. Having read somewhere that peanut oil is good up to 210 °C or so, I thought it would make a great immersion medium to even out the temperatures. Well, that it did, because all of the solder joints came undone at the same time at 190 °C: a great test of temperature uniformity. This was also a valuable calibration of my reference temperature probe — the melting point of 60/40 solder is 185–190 °C. Spot on! But no data gets logged when the wires come unsoldered.
When I reflow, I use a hotplate with a (crimped) diode as a temperature sensor, but I’ll often Kapton-tape another diode or two to the PCB in question to get an on-PCB measurement. And because the legs of these diodes are dangling in the breeze, I’ve gotten away with solder joints. I figured that would work here too, so I resoldered the test rig with seven new diodes taped to a piece of blank PCB to equalize the temperature. Everything looked to be going well until just below my target temperature of 250 °C, when I smelled smoke. I turned off the heater immediately and started logging temperatures, and sure enough they went down. And then they started to rise again, and there was a lot more smoke.
The epoxy layer of the PCB was on fire! It turns out that it’ll start to smolder at about 240 °C and burns nicely at 260 °C. Instead of going for the fire extinguisher, I opened some windows, kept on logging, and took out a camera. I actually got some data out of the run, but my office still smells a bit.
The next fail involved taping the diodes directly to a solid metal plate. It wouldn’t burn at 250 °C, after all. The problem is that the plate cools down very slowly unless you remove it from the ceramic heating element, and in the process the diodes and thermocouple wiggled themselves loose, resulting in an anomalous five-degree drop in temperature across the board. It turns out that this had happened to a lesser extent in the fiery-PCB run as well. You could work this out with some data massaging in post-production, but I decided to give the hot oil treatment another chance because it solved the contact problem very nicely.
The final run was with peanut oil again, but this time starting from a mellower high temperature of around 200 °C and without fully immersing the solder joints. And it would have worked, too, if it weren’t for the ramekin cracking as it hit the slab of marble that I used to cool it down. In retrospect, this was obvious, and it probably would have worked flawlessly if I had put it on a silicone hot-pad. With 200 °C oil leaking out all over my desk, and with copious amounts of paper towel, I managed to log this run, at least until the oil was all gone. I’ll clean up tomorrow.
The final tally: two usable datasets, one from the burnt-PCB run and one from the cracked-pot oil immersion test. Nothing’s perfect, but it’s enough to draw some conclusions.
My first conclusion is that the simplest method of all, a diode and a 10 kΩ resistor fed into an ADC, is sufficient for non-precise work across any reasonable temperature range. I knew this from using it to reflow solder and roast coffee, but I was still impressed by the good linearity of the diode when compared to the thermocouple reference. You should be using this “trick”.
Next, building a constant current source is probably not worth it unless you really care about nailing the temperature. If simple reproducibility will do, don’t bother. Yes, I got absolutely beautiful results out to 220 °C, but the difference between the best and worst cases is probably one or two degrees across the range. You should spend your time on ensuring good physical contact between the diode and the object that you’re measuring first, and then wire up transistors second.
The 1N4002, a big, beefy diode, performed nearly identically to the 1N4148 when fed through the same resistor value, and the three 1N4148s tracked each other perfectly. The only thing that matters is the current. Honestly, I was surprised. This suggests that you could simply buy a bag of diodes and 1% resistors, maybe calibrate a few to verify, and you’re done. Since the slopes of all the diodes are nearly identical, you could even calibrate them with a single point at room temperature. How easy is that?
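That single-point shortcut looks like this in code: assume the shared -2 mV/°C slope, record the forward voltage once at a known room temperature, and every later reading maps straight to degrees. The voltages below are illustrative numbers, not measurements from my runs:

```python
# One-point diode calibration: assume the nearly-universal
# -2 mV per degree C slope, record Vf once at a known temperature,
# and map every later reading to degrees.  Numbers are illustrative.

SLOPE = -0.002   # volts per deg C (typical figure from the article)

def make_thermometer(vf_cal, t_cal):
    """Return a function mapping forward voltage to temperature,
    calibrated with a single (Vf, temperature) pair."""
    def to_celsius(vf):
        return t_cal + (vf - vf_cal) / SLOPE
    return to_celsius

temp = make_thermometer(vf_cal=0.600, t_cal=25.0)   # room-temp cal point
print(temp(0.600))   # 25.0 at the calibration point
print(temp(0.250))   # ~200 deg C once Vf has sagged to 0.25 V
```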
Want to know what’s actually hard in all of this? Testing the circuit outside of the range at which it’ll function.