Calibrating the Sonoff ZNB02 zigbee temperature and humidity sensor

I recently installed a few Sonoff ZNB02 temperature and humidity sensors around the house. After monitoring them for some time, I noticed that the reported humidity level was higher than what is recommended for a comfortable and healthy indoor environment. Initially this got me worried about the indoor climate, but as there were no other indications of it being so humid, it made me suspect that these sensors are perhaps not so accurate. Therefore I decided to calibrate them.

When you mix half a cup of ordinary kitchen salt with a little water into a sludge (i.e., a fully saturated solution) and put that in a closed container, it acts as a constant-humidity buffer over quite a wide range of temperatures. For a saturated NaCl solution at room temperature, the relative humidity in the container will be 75%.

I placed two of the ZNB02 sensors together with the NaCl sludge in the container, put that in a room with a constant temperature, and waited about 24 hours for it to settle. My zigbee sensors are connected to a Tasmota zigbee bridge, which sends the data to an MQTT server, from where I forward it to a ThingSpeak data store and dashboard. You can see that the humidity level is very constant. One of the two sensors is noisier, which also causes it to update more often.
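The forwarding step amounts to a small script; here is a Python sketch, in which the JSON key "SNZB02", the field mapping, and the API key are placeholder assumptions rather than my actual configuration:

```python
# Sketch of the MQTT-to-ThingSpeak forwarding; the JSON key "SNZB02",
# the field mapping, and the API key are placeholders, not my actual setup.
import json
import urllib.parse
import urllib.request

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # placeholder ThingSpeak write key

def parse_tasmota_sensor(payload: str, key: str = "SNZB02") -> dict:
    """Extract temperature and humidity from a Tasmota tele/SENSOR JSON message."""
    reading = json.loads(payload).get(key, {})
    return {"temperature": reading.get("Temperature"),
            "humidity": reading.get("Humidity")}

def forward_to_thingspeak(reading: dict) -> None:
    """Push one reading to ThingSpeak via its HTTP update API."""
    params = urllib.parse.urlencode({"api_key": WRITE_API_KEY,
                                     "field1": reading["temperature"],
                                     "field2": reading["humidity"]})
    urllib.request.urlopen(f"{THINGSPEAK_URL}?{params}")
```

In practice the parse function would be hooked up to an MQTT subscription on the bridge's telemetry topic.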

One sensor reports the humidity to be 85% and the other reports 82%. The temperature readings of the two sensors were most of the time the same or nearly the same, differing between them by at most 0.1 degree Celsius.

To conclude, neither of them reports the humidity very accurately: against the 75% reference, one reads 7% too high and the other 10% too high. The precision or noise level also differs quite a bit. The noisy sensor sends more frequent updates over zigbee, which I expect to be reflected in a shorter battery life.
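Given the 75% reference, the readings can be corrected in software with a simple per-sensor offset; a minimal sketch, in which the sensor names are placeholders for however the sensors are actually identified:

```python
# Single-point calibration based on the 75% salt-test reference; the
# sensor identifiers here are placeholders.
HUMIDITY_OFFSET = {
    "sensor1": -10.0,  # reported 85% at the 75% reference
    "sensor2": -7.0,   # reported 82% at the 75% reference
}

def calibrated_humidity(sensor: str, reported: float) -> float:
    """Apply the per-sensor offset; unknown sensors pass through unchanged."""
    return reported + HUMIDITY_OFFSET.get(sensor, 0.0)
```

A single-point offset is of course only valid near the calibration point; a second reference humidity would be needed to also correct the slope.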

Mixing multiple microphones for hybrid meetings

Behringer MX400 as summing mixer

During the last one and a half years I have been working mostly from home, as have all of my direct colleagues. Initially it took us some time to get used to doing our group meetings online, but by now we know how to make them pleasant, inclusive, and efficient. Now that many people are vaccinated, we expect/hope that we’ll soon be able to get back to the university for work. However, there are a few aspects of online meetings that I value and hope we can maintain. The travel time is much less, making it easier to quickly join a meeting that otherwise would be held on the other side of campus. It is rather trivial to have people join from abroad, e.g., previous colleagues that want to keep their connection and contribute to the Donders knowledge and culture. Everyone can share their screen much more easily. The chat is used to post background material, links to relevant papers, etc. Consequently, I expect that we will not all of a sudden switch back to in-real-life meetings, but rather will have a (possibly indefinite) period in which some people attend in real life and others online.

In our MEG meeting and the hackathon we have experimented with different aspects of hybrid meetings and documented our findings. We quickly learned that to ensure lively discussions, in-person and online attendees should be able to hear each other very well. Spontaneous talk between the live participants is easy, but the online participants should be able to hear everything without extra strain and be able to chime in.

One of the core components to support hybrid meetings in the DCCN meeting rooms is a Meeting Owl, which is a smart 360-degree camera combined with a directional microphone and a speaker. The experience with it is overall quite OK, but its microphone is simply not that good. When listening to meetings from home, the in-person attendees are more difficult to hear and understand than the other online attendees. Using my experience with setting up a good audio system at home, I started thinking about and planning some improvements to the audio in our DCCN meeting rooms.

The microphone is a very important component, as it has to pick up the sound of all in-person attendees. The “Oval Office”, one of our most used meeting rooms, is suited for about 20 people around an elongated table. We tested a Superlux ECM999 omnidirectional microphone during a meeting in that room with online participants: according to the people online it was considerably better than the built-in microphone of the Owl. Hence we decided to go for two of these ECM999 microphones, one on each side of the table, connected to a Behringer audio interface. Using two microphones results in a stereo signal, but we don’t want to use them in stereo. Although Zoom supports stereo sound, it is not enabled by default and therefore prone to user errors. The setup in that room should also be compatible with other online platforms, such as Teams, Skype, Jitsi, and other web-based systems. These do not support stereo at all, since they are built for mono microphone input from a webcam or laptop.

Behringer UMC204HD with two microphone/line input channels

Connecting two microphones to the first two channels of the audio interface results in stereo: the 1st channel is audible as the left channel, the 2nd as the right. At home, where I use this interface, I can mix and pan the channels with specialized software such as LadioCast, or with a DAW such as Ableton Live. However, the setup at work should be robust and easy to use for colleagues that are less computer- and audio-savvy. Hence, I figured that the signals should be mixed to mono using an analog solution. Note that I did consider more sophisticated audio interfaces from Focusrite, MOTU, and Steinberg. These have built-in DSP processing that can be configured for effects and mixing. However, the use of specialized software is something I wanted to avoid, and my online exploration of the reviews and manuals pointed in the direction that the DSPs affect the analog outputs of these interfaces, but that the digital audio towards the computer is not mixed and remains as separate channels. That makes sense for the normal/intended usage of those interfaces, as you would want to record all channels and do the mixing and post-processing in DAW software.
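The software mixdown I wanted to avoid is conceptually trivial; as a Python sketch (not part of the actual setup, just for illustration):

```python
# Digital stereo-to-mono mixdown, shown only for comparison with the
# analog solution; this is not code that runs in the meeting room.
import numpy as np

def mix_to_mono(stereo: np.ndarray) -> np.ndarray:
    """Average an (n, 2) stereo buffer into a mono buffer.

    Averaging instead of plain summing keeps the result within range."""
    return stereo.mean(axis=1)
```

The point of the analog approach is exactly that no such software needs to run, or be configured, on the meeting-room computer.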

Back of the Behringer UMC404HD with the 4 inserts on the right.

To implement the analog stereo-to-mono conversion, I used the inserts that are offered on the back of the Behringer UMC404HD and the UMC204HD. These inserts use 1/4 inch TRS jacks, where the tip is the “send”, the ring is the “receive”, and the sleeve is the shared ground. The multi-channel (aka stereo) signal can be converted to mono by feeding the “send” of channels 1 and 2 to a small line-level mixer, and feeding the mixed or summed signal back to the “receive” of both channels 1 and 2.

Behringer MX400 inside view

Specialized summing mixers can get quite expensive, but any line-level mixer should be able to do the job. I opted for a 4-channel mono Behringer MX400 that I still had lying around. Rather than ordering or making specialized insert cables and an output Y-splitter to get the mixed signal back into the inserts, I modified the MX400 mixer itself. It has five female jacks (4 mono inputs, 1 mono output), and those happen to be TRS jacks with only the tip and the sleeve connected and the ring not in use.

By connecting the tip of the output (on the right in the photo below) to the ring of the 4 inputs, and by using stereo patch cables between the UMC404HD inserts and the MX400 inputs, the mixed/summed output is presented back on all channels of the audio interface just prior to the ADC. Consequently, multiple microphones can be connected to the UMC404HD, but the computer will receive the same summed/mixed signal on all 4 digital channels. The online meeting software (Zoom, Skype, etc.) will use the first channel of the audio device that you select, which now contains the analog sum of all connected microphones.

Behringer MX400 modification
Modification to the MX400 to send its output back to the inserts; now to be used with stereo patch cables

Wireless classroom conference microphone system – #5

This post is part of a series on designing a wireless microphone system for hybrid online meetings, i.e. with some people present in person and others present online. See also the previous post in this series.

So far I have built and experimented with 4 wifi microphones, including an on/off switch and a rechargeable LiPo battery. I also added a magnetic name tag holder like this to the back of each of the microphones, allowing them to be mounted on a shirt or the lapel of a jacket. The most relevant parts comprise an INMP441 microphone connected to a Lolin32 lite board. I have a few more wired up with just the Lolin32 board and the microphone, to allow testing with a larger number.

I have also implemented a Python-based server that runs on a Raspberry Pi Zero W, which also functions as a wifi access point. The audio server buffers and mixes the incoming signals from the different microphones and plays the result on a HifiBerry DAC+ Zero audio card. The output is a line-level voltage, strong enough to drive a headphone, and with some attenuation also suitable to feed into the microphone input of a low-cost USB headset adapter. The whole system works as expected, although the noise level of the microphones is higher than I had hoped. My guess is that this is in part due to the microphone being so close to the ESP32 antenna. Also, the wires between the microcontroller and the microphone run over the Lolin32 board without any shielding, probably picking up EM interference.
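Stripped of all networking and buffering details, the core mixing step of the server boils down to summing the microphone buffers with clipping; a simplified sketch of that idea (the actual implementation may differ):

```python
# Simplified sketch of the server's mixing step: sum the per-microphone
# int16 buffers and clip the result back into the int16 range.
import numpy as np

def mix_buffers(buffers: list) -> np.ndarray:
    """Sum equally long int16 microphone buffers into one output buffer,
    widening to int32 for the sum to avoid overflow before clipping."""
    mixed = np.sum([np.asarray(b).astype(np.int32) for b in buffers], axis=0)
    return np.clip(mixed, -32768, 32767).astype(np.int16)
```

Widening to 32 bits before summing matters: adding two loud int16 signals directly would silently wrap around instead of clipping.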

The Arduino source code, the Python audio server code, and the Fusion360 CAD design files are available from the wifimic repository on Github.

The fact that it works with a USB headset adapter like this, i.e. a miniature external sound card, demonstrates that the device can also be connected to the standard “pink” microphone input of a Windows laptop.

My MacBook has a combined TRRS audio input/output, and the TRS (stereo) cable that comes from the HifiBerry DAC audio card is not recognized as a microphone when I plug it in; over the USB headset adapter it works fine. There are Y-adapters that split the TRRS input into a TRS for the headphone and a TS for the microphone, which would allow connecting it. However, the Python audio server also works fine on macOS, which has the advantage that I can investigate the microphone audio signals in full quality. Rather than first converting the sound to an analog line-out on the Raspberry Pi, and then back into a digital representation by the USB headset adapter, I can use BlackHole or Soundflower to get the digital audio stream as it is generated by the microphone. A cool feature of BlackHole and Soundflower is that they support many channels. With some modifications to the Python server script, it will also be possible to stream the audio output of each microphone to its own channel, and record them all with Audacity.

Wireless classroom conference microphone system – #4

This post is part of a series on designing a wireless microphone system for hybrid online meetings, i.e. with some people present in person and others present online. See also the previous and next post in this series.

I want to design a wireless clip-on “lapel” microphone based on the LOLIN32 lite board and the INMP441 I2S microphone module (not to be confused with the INMP411, which has an analog output). Given the size of the board (about 25 by 50 mm), an 802040 or possibly an 802540 Lithium Polymer battery would be a nice match. These LiPo cells are 8 mm thick, 20 (or 25) mm wide, and 40 mm long. In a few iterations, I designed a simple enclosure in Fusion360 and 3D printed them.

ESP32 wifi microphone enclosure

The box has a port in the top for the microphone; on the inside there are two rails to keep the ESP32 board in place. The microphone is mounted in a small holder that clips perpendicularly onto the antenna side of the ESP32 board. The micro-USB connector is exposed at the bottom, which allows charging the LiPo battery. I expect that this design will also allow making a docking station for charging multiple microphones at once, for example using these male micro-USB connectors. The first versions (red and blue) did not have an on/off switch; I added one in the later versions of the design (green, yellow).

The ESP32 wifi microphone enclosure is about 57x28x18 mm in size. For mounting the microphone on a lapel or in the neck of a shirt, I considered 3D printing a clip. However, I know from experience that 3D printing a clip with exactly the right flexibility is not so simple, since that depends on the properties of the filament. A clip would also make the 3D printing and assembly more complex. I think that a magnetic name badge holder will be a good alternative for mounting the microphone to your clothing; it has the advantage that the microphone can be positioned more flexibly, especially on informal clothing such as t-shirts. Using double-sided adhesive tape, the magnetic name badge holder can be attached to the recesses at the back of the 3D-printed microphone enclosure.

magnetic name badge holder

Wireless classroom conference microphone system – #3

This post is part of a series on designing a wireless microphone system for hybrid online meetings, i.e. with some people present in person and others present online. See also the previous and next post in this series.

I evaluated various small ESP32 and ESP8266 development boards for use in a clip-on microphone. The requirements are that it should be cheap, it should be small, and it should include a charger circuit for a LiPo battery. The most suitable candidates are the WEMOS D1 mini pro and the WEMOS LOLIN32 lite.

LOLIN32 lite versus D1 mini pro

The first is based on an ESP8266; its advantage is that it is officially available from the WEMOS store. The second is based on the ESP32, has the advantage of a faster MCU, includes Bluetooth (although I have no plans for that at the moment), and is even cheaper (about €2.50, whereas the WEMOS D1 mini pro is about €5.00). The disadvantage of the LOLIN32 lite, however, is that according to the ESP32 page on Wikipedia it is retired and hence not available through an official WEMOS channel. There are many clones of the LOLIN32 lite board available on AliExpress as “LOLIN32 lite” or as “LOLIN32”; however, the quality of these clones may vary.

I removed the battery connector from the WEMOS board (on the right in the photo) to reduce the height. Furthermore, using a Dremel tool I made a small indentation in the board; this allows passing the battery wires through. Both boards feature a JST-PH-2.0 battery connector that points along the axis of the board, in the same direction as the micro-USB. This arrangement of the connectors makes it impossible to plug in a battery while at the same time having the micro-USB connector flush with the side of an enclosure. To keep the assembly as simple as possible, I want external access to the USB connector for charging, so instead of using the battery connector, I will solder the wires from the battery straight onto the board. The JST-PH-2.0 connector comes off easily with a pair of pliers and a little force.

Note that the LOLIN32 lite should not be confused with the D32 or the D32 pro version. Here is a comparison with the boards side by side; from left to right: the WEMOS D1 mini, the WEMOS D1 mini pro, the LOLIN32 lite, the LOLIN32 pro, and the LOLIN D32.

comparison of different LOLIN and WEMOS boards

LOLIN board comparison

To evaluate them, I ordered the ESP32-based LOLIN32 and the ESP8266-based D1 mini pro, together with some INMP441 I2S microphone modules. Using the Arduino example code, I implemented a simple microphone with both of them. I found that there is more online documentation and there are more examples of the I2S interface for the ESP32; for the ESP8266 there is less documentation (e.g., it is not mentioned here), and it seems from this example that the I2S implementation is limited to 16 bits.

I also experimented with the LOLIN32 and the Adafruit SPH0645 I2S microphone module following this example. Compared to the INMP441, the SPH0645 gave me a harder time with the byte-swapping and scaling of the digital signal. Probably in the end my problems mainly had to do with some I2S timing incompatibility between the ESP32 and the SPH0645. In the best situation, there were still problems with the digital signal randomly jumping up and down, especially at larger input volumes.

For the microphones I therefore decided to continue with the INMP441 modules, which are available for about €1.70.

INMP441 MEMS microphone module with I2S interface

Wireless classroom conference microphone system – #2

This post is part of a series on designing a wireless microphone system for hybrid online meetings, i.e. with some people present in person and others present online. See also the previous and next post in this series.

Pondering about wireless microphones for a classroom or for a larger scale conference/meeting room, I identified some requirements:

  • it has to scale to a classroom with 20 or 30 attendees
  • it has to be cheap per microphone, rather in the range of €10 than €100
  • it has to be simple to use, as there is no sound technician to control a mixing console
  • it has to integrate with online meeting software as if it were a regular microphone
  • it has to be portable, so that I can take it to any class or meeting room
  • it has to be DIY and easy to build with readily available components

Imagine that you would have a number of rechargeable clip-on microphones that all transmit their audio wirelessly to a single base station. The base station could also act as a charging station, i.e., when not in use the microphones would be docked in it. The base station would be connected to the central laptop/computer as if it were a single external microphone. Bluetooth lapel microphones exist, but Bluetooth does not allow connecting many microphones to the same computer. Proprietary radio systems such as those used by audio companies like Sennheiser are not DIY friendly. There are easy-to-use RF modules, but those are more suited for IoT applications than for streaming audio. This actually sounds like an ideal application for a 5G device-to-device network, but components for those are not easily available yet.

I think wifi would have enough bandwidth and would be able to support a large number of clip-on microphones: a dedicated wifi access point has no problems dealing with 50 to 100 connected clients. It can be a dedicated/closed network since there is no reason to have the microphones connected to the internet, except perhaps to receive software updates. Also, other devices such as laptops don’t have to connect to this wifi network, except when a web interface is considered for configuration and audio mixing (see below).
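A back-of-the-envelope calculation supports this; assuming uncompressed 16-bit mono PCM at 16 kHz (an assumption for illustration, not a chosen format):

```python
# Back-of-the-envelope wifi bandwidth estimate; 16 kHz / 16-bit mono PCM
# is an assumption for illustration, not a measured or chosen format.

def stream_kbps(sample_rate_hz: int = 16000, bits: int = 16, channels: int = 1) -> float:
    """Raw PCM bit rate of one microphone stream, in kbit/s."""
    return sample_rate_hz * bits * channels / 1000

per_mic = stream_kbps()               # 256.0 kbit/s per microphone
classroom_mbps = 30 * per_mic / 1000  # 7.68 Mbit/s for 30 microphones
```

Even 30 uncompressed streams stay well below the throughput of a dedicated 802.11n access point, leaving headroom for protocol overhead and retransmissions.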

For the clip-on microphones, I am considering using an ESP32 connected to an I2S MEMS microphone and a small (e.g., 500 mAh) LiPo battery. These can be housed in a custom 3D-printed case with a clip to attach it to the clothing, and a hidden connector at the bottom for charging in the docking bay. The ESP32 needs firmware that sets up and maintains the wifi connection, processes the I2S audio, does threshold detection, and, when the signal is loud enough, transmits the audio over wifi.

For the base or docking station and wifi access point, I am considering a Raspberry Pi Zero W combined with a HifiBerry DAC+ Zero. The line-level output of the HifiBerry would be provided on a standard 3.5 mm female jack, such that a standard TRS or TRRS jack-to-jack cable can be used to connect the base station output to the laptop microphone input. The base station requires software that receives the (UDP?) wifi input streams of all ESP32 modules, normalizes them, and mixes them into a single audio output.
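Normalizing the per-microphone levels before mixing could look like the following sketch, under the simplifying assumption that the audio is handled as float arrays (the eventual implementation may differ):

```python
# Sketch of per-microphone normalization before mixing; assumes the audio
# is held as float arrays, which is a simplification.
import numpy as np

def normalize(buffer: np.ndarray, target_peak: float = 0.5) -> np.ndarray:
    """Scale a float audio buffer so that its peak sits at target_peak;
    silent buffers are returned unchanged to avoid division by zero."""
    peak = np.max(np.abs(buffer))
    if peak == 0:
        return buffer
    return buffer * (target_peak / peak)
```

A real implementation would track the peak over a longer window rather than per buffer, so that the gain does not pump up and down with every packet.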

Some additional features I was thinking of for the base station are a volume indicator, e.g., a Neopixel that turns green-orange-red. Furthermore, there could be a mute button for every microphone, a solo button (muting all but the one that has been selected), and knobs to adjust the volume level for each channel. These could be implemented using physical buttons placed next to the charging bays in the dock, but also through a web interface.

Usability in a classroom or meeting room by people that have no technical understanding of the system is also crucial. If the base station would have physical mute buttons and/or volume knobs for each of the channels, the clip-on microphone modules must be clearly labeled/numbered. Possibly they could all be 3D printed in a different color and the corresponding charging bay (with the knob/button next to it) in the base station would then have the same color. The individual microphones don’t have to be recognizable if audio mixing or muting is not needed.

Considering that this system might be used at the same time in multiple neighboring classrooms, the wifi signal amplitude should be strong enough to have within classroom reception, but as weak as possible to not interfere between classrooms and with the regular internet wifi.

I think that the base station could be made for about €50-100 (hardware costs only, mainly depending on whether it has buttons and knobs for each channel) and that each clip-on microphone can be made for about €10-15. For a system comprising 30 clip-on microphones to accommodate a complete classroom, that would amount to €350-550. For a smaller meeting-room system with 8 clip-on microphones, it would be around €150.

There is quite some development and testing needed for this. For prototyping I have ordered an Adafruit HUZZAH32 and an I2S microphone breakout board from a local and fast (but also more expensive) supplier, and some comparable but cheaper components from AliExpress. Let’s start with a single microphone, similar to this or this baby monitor. If I can get that to work with a Raspberry Pi, the next step will be to check how well that scales to a larger number of ESP32 microphones.

Wireless classroom conference microphone system – #1

This post is part of a series on designing a wireless microphone system for hybrid online meetings, i.e. with some people present in person and others present online. See also the next post in this series.

Update 22 November 2020 – I split the original post into two pieces to make it easier to follow up and added some information about commercial solutions.

I was chatting with my daughter about the challenges of doing hybrid Zoom or Teams meetings. She was not allowed to go to school for a few days and had to follow lessons online, with the teacher and most students in the class. And I was still stuck in my attic, organizing my own university teaching and meetings remotely. Recently I went to work a few times for meetings, but only a few people came to work in person, and most attended online through Zoom. This is similar to the current school situation for my daughter, where most kids attend in person but some attend online on Teams. I expect that we will have these hybrid online/in-person meetings for quite some time to come; perhaps they might even become the new “normal”.

The challenge with hybrid in-person and online meetings is mainly in the physical room where multiple people are attending in person. The online attendees simply connect to the online meeting the same way as if it were a 100% online meeting. The people present in real life also have their laptops in front of them with the webcam on, but with the speakers and microphone muted. This allows online attendees to see everyone, including the people in the physical room. Only one person in the physical room unmutes the speakers and microphone. This allows the noise and feedback suppression of the video conferencing system to do its work and not amplify the voice of the local attendees through the speakers. If multiple laptops had their speakers and microphones on, you would hear echoes, and the sound would start feeding back, creating lots of noise.

Amplifying the audio from the online attendees to the people in the physical room is easy, e.g., using some external speakers connected to the laptop. The problem however is with picking up the voice from the attendees in the physical room. In smaller meeting rooms at the university we use table microphones, like the USB Samson UB1 or the analog Philips LFH 9172 which can be daisy-chained. We also have one room with a Polycom video conferencing setup, and are experimenting with microphone arrays for the larger meeting rooms. However, these microphone systems are still quite expensive, not so portable due to the required cabling, and they work best when placed in the middle of a round table with an equal distance to all speakers. I.e., these systems are OK for traditional conference rooms, but not for classrooms or more ad-hoc meeting setups with multiple people in complex spatial arrangements, or when people have to keep a distance from each other.

What if we could give everyone in the room a wireless clip-on lapel microphone? Companies like Sennheiser and Shure have wireless microphone systems for studios and stage performances, but these don’t scale well to a large number of in-person attendees in a classroom or meeting, at least not financially: imagine equipping all kids in a classroom with a €300 microphone.

Shure Microflex

The Shure Microflex wireless conference system provides a solution for relatively flexible setups for conference calls with multiple people on-site and others online. However, it consists of rather bulky wireless gooseneck microphones that are placed on the table in front of the participants. Although being wireless makes it more portable than regular conference systems that have a fixed installation, I don’t think it can be easily taken from one classroom or meeting room to another.

RØDE Wireless GO

The system I have in mind is perhaps more comparable to the RØDE Wireless GO, which aims at online content creators and consists of a compact clip-on transmitter and a receiver with an analog output that plugs into a camera. The transmitter is to be worn by the presenter or the person being interviewed and has a microphone built-in. Alternatively, you can connect a separate lapel microphone through a 3.5 mm jack plug. The system operates in the 2.4 GHz range, and according to the specifications you can use up to 8 systems in the same location. However, note that this would then consist of 8 transmitters/microphones and 8 receivers, whereas I am looking for a solution with many microphones connected to a single receiver, without an audio mixing panel.

Shure ULX-D digital wireless system

A more professional system that shares some similarities with what I have in mind is the Shure ULX-D digital wireless system. This comes with a bodypack and handheld microphone for mobile use, and a gooseneck or boundary microphone for use on a table. It also includes various receivers, with up to four channels. Multiple systems can be combined and using a rather fancy assignment/management system for the frequencies at which the devices operate, it can scale to a large number of microphones.

Using the Bela to measure the frequency response

Bela is a maker platform for creating beautiful interactions. It consists of a BeagleBone Black with a shield or hat that has 2 audio inputs, 2 audio outputs, 8 analog inputs, and 8 analog outputs. It is complemented with a very slick web interface that allows you to write, and very easily compile and run, your code. And very cool is that the web interface features an oscilloscope.

I am planning to build a purely analog EEG/EMG/ECG amplifier, similar to this design on Instructables. That involves making choices on the filter settings: a high-pass filter to remove electrode drift, a notch filter for line noise, and a low-pass anti-aliasing filter. Hence I started thinking about how to determine the combined effect of all those filters, together with the multiple amplifier stages. It occurred to me that the Bela can act both as a signal generator and as a digital recorder and oscilloscope.

Bela and breadboard with filter

On this GitHub page I am sharing a Bela project that outputs a sine wave on the analog output, which can be fed through an external circuit and subsequently measured using the analog inputs. The project computes a real-time discrete Fourier transform of the output signal and compares its amplitude and phase to those of the input signal. Using a LaunchControl XL MIDI controller (or alternatively a small EEGsynth patch for an on-screen MIDI controller), I can select the frequency and start/stop a sweep over the whole frequency range. The amplitude and phase response at each frequency is logged to disk.
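The core of the measurement is a single-bin DFT: correlating the recorded signal with a complex exponential at the stimulation frequency. A simplified offline Python version of that computation (the Bela code itself is C++ and runs sample-by-sample in the real-time audio loop):

```python
# Offline, simplified version of the single-bin DFT used for the
# frequency-response measurement; the real implementation runs in C++.
import numpy as np

def single_bin_dft(signal: np.ndarray, freq_hz: float, fs_hz: float):
    """Amplitude and phase of one frequency component of a real signal,
    obtained by correlating with a complex exponential at that frequency."""
    n = np.arange(len(signal))
    bin_value = np.sum(signal * np.exp(-2j * np.pi * freq_hz * n / fs_hz))
    amplitude = 2 * np.abs(bin_value) / len(signal)  # factor 2: real signal
    phase = np.angle(bin_value)
    return amplitude, phase
```

Repeating this for the generated and the measured signal, and taking the ratio of amplitudes and the difference of phases, gives one point of the Bode plot.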

Here you can see the frequency response when the Bela analog output is fed directly into the analog input. It is very nicely uniform, with unity gain and no observable phase shift up to the upper limit of 22050 Hz.

Bode plot of frequency response

And here is the frequency response when the Bela audio (headphone) output is fed directly into the audio input. You can see that – as expected – it is AC-coupled with a high-pass filter, and has an anti-aliasing filter at the high end.

Bode plot of frequency response

From the Bode plot figures it is clear that something funky is going on with the phase estimates. I suspect that to be due to numerical errors accumulating in my computation of the DFT. There are fancy algorithms for single bin sliding DFTs. However, I want the DFT algorithm to run in the (hard) real-time audio loop, which means that it should have a very low computational cost. Furthermore, I want it to be memory efficient, which means that I don’t want to hold a large buffer with many samples.

I also tried it with a simple passive first-order low-pass filter on a breadboard, built from a 100 nF capacitor and a 10 kOhm resistor, which should have a theoretical cutoff frequency of 159 Hz. The resulting frequency response up to 5000 Hz is given here:

Bode plot of frequency response
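The theoretical cutoff quoted above follows directly from the component values:

```python
# -3 dB cutoff frequency of a first-order RC filter: fc = 1 / (2*pi*R*C).
import math

def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    return 1 / (2 * math.pi * r_ohm * c_farad)

fc = rc_cutoff_hz(10e3, 100e-9)  # about 159 Hz for 10 kOhm and 100 nF
```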

And if I connect the same capacitor and resistor as a high-pass filter, I get the following frequency response up to 5000 Hz. Note that the output of the high-pass filter cannot be fully recorded with the analog input (which accepts 0-4 V only), hence I used the audio input.

Bode plot of frequency response

Improved touch-proof enclosure for OpenBCI

While assembling the touch-proof enclosure for the OpenBCI Cyton/Ganglion biosensing amplifier boards, I realized that with the board in the middle of the enclosure, there is little space for the Dupont wires connecting the pins of the OpenBCI board to the touch-proof connectors. Trying to squeeze the board in place, some of the solder joints broke off. After repeatedly re-soldering the wires to the connectors, I was able to get it all properly in place. However, this was definitely a design flaw.

I designed a new version that has the OpenBCI PCB board rotated by 45 degrees and shifted a bit to the corner. This gives more space for the wires and reduces the stress on the joints. Here you can see the new enclosure printed for a 4-channel Ganglion board.

OpenBCI touch-proof enclosure version 3 – with the PCB board in the corner

Compared to the previous one for the Cyton, the difference is also in the colour of the connectors: I used 4 pairs of red and blue connectors for the bipolar channels, one black connector for ground, and one blue connector as the common reference. Using the 4 channels (i.e., the red connectors) relative to the common reference requires toggling the micro-switches on the Ganglion PCB. A common reference is handier for EEG measurements, whereas the bipolar configuration is convenient for ECG/EMG, but with some extra electrodes it also works fine for EEG. The Cyton version has 8 red connectors, one blue connector for the reference, and one black connector for ground.

Another change is aesthetic; thanks to the nice post and configuration files from Rainer I figured out how to 3D print with multiple colours. I updated the Fusion 360 design of the enclosure to include the EEGsynth logo. The logo is embedded in blue and white in the black background of the box.

logo embedded in the 3D-printed enclosure

The 3D design can be downloaded from Thingiverse.

12 Volt trigger for NAD-D3020 amplifier

Update 3 January 2021 – mention that I am now using Tasmota firmware.

The NAD D3020 is a hybrid digital audio amplifier with a combination of analog and digital inputs. I have been using it for quite some years now to play the sound of my Samsung smart TV over the living room speakers and for digital radio, iTunes and Spotify from my Mac mini. The Samsung is connected with an optical Toslink cable, the Mac mini is connected with a USB cable.

Given the way the D3020 is placed in our media cabinet, its on/off button is not easy to reach. The D3020 remote control is really crappy, and I find it annoying anyway to have to use multiple remotes to switch the power of all devices. Also, the status LEDs of the D3020 are dim and have become considerably worse over time, especially those for the OPT1 and USB inputs, which serve the TV and the Mac mini and hence are on most of the time. I guess it uses OLEDs, which have degraded over time. Consequently, it happened quite often that we forgot to switch the amplifier off for the night.

However, the D3020 features a 12V trigger input port which allows the amplifier to be switched on/off automatically along with other gear. Of course, neither the TV nor the Mac mini has a 12V output port, but both are connected to my home network; hence it is possible to detect over the network whether they are powered on.

I built an ESP8266-based trigger which allows switching the D3020 via its 12V trigger input. This is combined with a small Node.js application running on a Raspberry Pi which pings my TV and my Mac mini over the network every 5 seconds. If either one returns the ping – and hence is powered on – an HTTP request is made to the ESP8266 to switch the trigger on. If neither the TV nor the Mac mini returns the ping, an HTTP request switches the trigger off.
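The watcher logic itself is simple. The original is a Node.js app, but an equivalent outline in Python looks roughly like this; the host names and the ESP8266 endpoint are hypothetical placeholders, not the actual ones from my setup:

```python
# Sketch of the Raspberry Pi watcher: ping the monitored devices and
# switch the ESP8266 trigger accordingly. Host names and the HTTP
# endpoint are hypothetical; the real implementation is in Node.js.
import subprocess
import time
import urllib.request

HOSTS = ["samsung-tv.local", "mac-mini.local"]  # hypothetical host names

def host_is_up(host):
    """Return True if the host answers a single ping within 1 second."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def desired_trigger_state(host_states):
    """The amplifier should be on if ANY monitored device is on."""
    return "on" if any(host_states) else "off"

def set_trigger(state, esp="esp8266.local"):
    # hypothetical endpoint; with Tasmota firmware the equivalent
    # would be e.g. http://<ip>/cm?cmnd=Power%20On
    urllib.request.urlopen(f"http://{esp}/{state}", timeout=2)

def main():
    while True:  # poll every 5 seconds, as in the original setup
        set_trigger(desired_trigger_state(host_is_up(h) for h in HOSTS))
        time.sleep(5)
```

The key design point is the `any()` in `desired_trigger_state`: the amplifier stays on as long as at least one source device is alive, and only powers down once both are off.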

The hardware is implemented using a Wemos D1-mini ESP8266 board. The ESP8266 uses 3.3V logic, which is not enough for the trigger input. However, 5V turns out to be sufficient to trigger the amplifier. I tried using a logic level converter, but it did not produce enough output current on the 5V side, causing the voltage to sag and remain below the trigger threshold. Therefore I designed a circuit in which one of the 3.3V GPIO pins is used to switch an optocoupler. The output side of the optocoupler is connected to the 5V USB input voltage of the Wemos board. Although the output voltage does not fully reach 5V, it turns out to be enough for the trigger input of the D3020.

The design follows that of a MIDI input, see here on Sparkfun and here on the Teensy forum. The difference is that the optocoupler input comes from the microcontroller GPIO pin at 3.3V, and the output is pulled up to 5V from the Vin pin. I also added a diode to protect the electronics from reverse voltage spikes that might come from the amplifier.

The list of components is:

The PC900V datasheet specifies a maximum forward current of 50 mA, which would require a 66 Ohm resistor at 3.3V. However, the maximum current that can be drawn from a single GPIO pin is 12 mA, hence I decided to use a 270 Ohm resistor.
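The resistor sizing is just Ohm's law on the two current limits; a quick check of the numbers (note that, like the text, this neglects the LED forward voltage drop, which only makes the result more conservative):

```python
# Resistor sizing for the optocoupler input LED, following the
# numbers in the text (LED forward voltage drop neglected).
V = 3.3             # GPIO logic level in volts
I_max_led = 0.050   # PC900V maximum forward current (datasheet)
I_max_gpio = 0.012  # per-pin current limit of the ESP8266 GPIO

print(round(V / I_max_led))   # 66 Ohm: minimum for the LED alone
print(round(V / I_max_gpio))  # 275 Ohm: 270 Ohm is the nearest standard value
```

The GPIO limit is the stricter of the two, so the 270 Ohm value is set by the ESP8266, not by the optocoupler.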

Here you can see the design on a breadboard for testing:

And the final implementation just prior to fixing it with hot glue:

The firmware for the ESP8266 that I wrote myself can be found here on GitHub. However, around 2019 I switched to Tasmota, which is a generic open-source firmware for ESP8266 devices like these.

I am using this ESP8266-based 12V trigger in combination with a small Node.js script running on a Raspberry Pi that constantly monitors whether either my TV or Mac mini is powered on. The code for this can be found here on GitHub.