In December 2019, an experiment was started to improve the light sensor measurements. For this experiment, a specially prepared station was mounted at the Geophysical Institute (GFI) of the University of Bergen, in the vicinity of a higher-quality calibrated sensor for comparison.
The experiment is conducted with station 204. This station has a standard humidity/temperature sensor and two light sensors that are identical except for their diffusor:
- The first has a standard diffusor: half a pingpong ball glued onto the enclosure top, with the BPW34 sensor mounted directly on the enclosure top (with two holes in the top for the sensor pins).
- The second has a rectangle of 3mm teflon hot-glued onto the enclosure top. The BPW34 sensor is mounted in a hole in the enclosure top (the top of the sensor flush with the top of the enclosure).
Below, and in the code, these are referred to as "Lux1" and "Lux2" respectively.
Both sensors use the same BPW34 photodiode and the same circuit to connect to the microcontroller. This is similar to the circuit specified in the schematic and used by most stations so far, except:
- A capacitor is added to the analog input to stabilize the value read.
- The 10kΩ switchable resistor is replaced by a 3.3kΩ resistor to extend the upper range of the sensor.
The resulting circuit looks like this (below is the circuit for Lux2, for Lux1 it looks the same):
For Lux1, the existing lux sensor circuit was modified with the above changes. For Lux2, the soil sensor circuit was easily repurposed into the above circuit. On the board, it looks like this:
Stabilization capacitor
Without the added capacitor, the value seems to fluctuate significantly when doing continuous measurements (tested at 1Hz) in stable light conditions. This instability is especially noticeable at lower lux values when only the 100k resistor is enabled, presumably because the current is then low. It is partly explained by the input current into the ADC, which has an internal 14pF sample-and-hold capacitor that must be charged without influencing the input voltage too much. However, a 10nF capacitor should already be sufficient to mostly stabilize the read value. When testing 10nF and 100nF, the 10nF still showed ±50 ADC counts of noise, reduced to ±15 counts with 100nF (later, in different conditions, we saw ±3 counts as well). This suggests that there are additional error sources. These tests have only been done in artificial light, in imperfect circumstances, so there might also have been small actual variations in light (e.g. high-frequency flicker of lamps) that are now smoothed out by the capacitor, but that is probably also desirable in practice.
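As a rough sanity check (a back-of-envelope sketch, not part of the firmware), the worst-case error caused by charge sharing with the 14pF sample-and-hold capacitor can be estimated as follows:

```python
# Back-of-envelope estimate of the ADC error caused by charge sharing
# between the external stabilization capacitor and the ATmega328p's ~14pF
# sample-and-hold capacitor (worst case: the S/H capacitor starts at 0V or
# at the reference voltage).
C_SH = 14e-12   # sample-and-hold capacitance (from the 328p datasheet)
V_REF = 3.3     # ADC reference voltage (worst case for absolute error)

for c_ext in (10e-9, 100e-9):
    # When the S/H capacitor is connected, charge redistributes and the
    # input voltage moves by at most V_REF * C_SH / (C_SH + C_ext).
    delta_v = V_REF * C_SH / (C_SH + c_ext)
    counts = delta_v / (V_REF / 1024)  # one ADC count is V_REF / 1024
    print(f"{c_ext * 1e9:.0f}nF: error <= {delta_v * 1000:.2f}mV (~{counts:.1f} counts)")
```

This estimate gives roughly one count of error with 10nF and a fraction of a count with 100nF, far less than the noise observed above, which supports the conclusion that additional error sources are involved.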
Modified resistor value
The maximum lux value depends on the maximum ADC voltage (Vcc: 3.3V), the sensor sensitivity (from the datasheet: 8.9nA/lx) and the smallest resistor value (100kΩ in parallel with the 10kΩ or 3.3kΩ switchable resistor). With the original resistor, the maximum value is:
3.3V / (100kΩ // 10kΩ) / 8.9nA/lx = 40786 lx
In practice, this value was reached on sunny days, resulting in clipped measurements. With the new resistor, the maximum value becomes around three times higher:
3.3V / (100kΩ // 3.3kΩ) / 8.9nA/lx = 116067 lx
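The same calculation, written out as a small script using only the nominal values:

```python
# Reproduce the maximum-lux calculations above for the original 10kΩ and
# the new 3.3kΩ switchable resistor (each in parallel with the fixed 100kΩ).
V_MAX = 3.3            # maximum ADC voltage (Vcc), in volts
SENSITIVITY = 8.9e-9   # sensor sensitivity from the datasheet, in A/lx

def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

for label, r_switch in (("10kΩ (original)", 10e3), ("3.3kΩ (new)", 3.3e3)):
    r_shunt = parallel(100e3, r_switch)
    max_lux = V_MAX / r_shunt / SENSITIVITY
    print(f"{label}: shunt {r_shunt:.0f}Ω, maximum {max_lux:.0f} lx")
```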
Calibration and attenuation
The lux calculation in the firmware uses the sensor sensitivity specified in the datasheet. This is the sensitivity of the raw sensor, without any diffusor. In practice, the pingpong-ball diffusor seems to hardly attenuate the signal, based on some rough tests with a separate lux meter. The teflon diffusor does attenuate the signal significantly, meaning less light reaches the sensor and the values read will be lower (which also means the effective maximum lux value that can be measured is even higher).
The current code applies a very rough correction to the calculated lux value for Lux2 (the value is multiplied by 3, based on earlier testing by the GFI). This is just to get a roughly correct lux value and should not be taken as definitive. In analysis, the raw ADC values should probably be used as much as possible. Using those, we can probably determine the correct sensitivity to use for both sensors.
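As an illustration of how that could be done, below is a minimal sketch that fits the photodiode current (derived from the raw ADC values, see "Interpreting the data" below) against the lux values of the reference sensor to estimate an effective sensitivity per diffusor. The arrays are placeholder dummy values, not real data:

```python
import numpy as np

# Placeholder dummy arrays; replace with real matched measurements:
# photodiode current in nA (derived from the raw ADC values) and the
# simultaneous lux readings of the GFI reference sensor.
current_na = np.array([90.0, 450.0, 890.0])
reference_lux = np.array([10.0, 50.0, 100.0])

# Least-squares fit through the origin of current = sensitivity * lux,
# giving an effective sensitivity (in nA/lx) that includes the diffusor.
sensitivity = np.sum(current_na * reference_lux) / np.sum(reference_lux ** 2)
print(f"Estimated effective sensitivity: {sensitivity:.2f} nA/lx")
```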
Software
For this experiment, the firmware was modified to read two lux sensors and measure their values. The currently used version of the code can be found in the git repository.
Some notable changes are:
- In addition to a calculated lux value, the raw ADC values and the ADC range used are also transmitted.
- Instead of every 15 minutes, the station measures and transmits every minute. To prevent excessive airtime usage, the transmission happens at SF7.
- The firmware does an extra supply voltage measurement while transmitting.
Supply voltage under load
Normally, stations measure their supply voltage as part of the data collection for a measurement. However, since the supply voltage is measured after the regulator, it usually reads a constant voltage and only shows a lower voltage when the batteries are nearly depleted (sometimes just for one or a handful of measurements before the power runs out).
To get an earlier warning, it might be useful to measure the supply voltage under load. When a higher current is drawn, batteries have a tendency to slightly drop their voltage, which shows up as a lower supply voltage earlier. By measuring while the radio is transmitting, the load should be higher and the voltage might drop.
To implement this, a flag in the LoRaWAN library is checked that should indicate transmission has started. To give the voltage some time to react, the measurement happens 10ms after transmission starts. It might be even better to measure as late as possible (e.g. just before TX is complete), but that would be a bit trickier to implement. Note that this was implemented rather quickly and the battery behaviour and timing were not properly verified, so this might not yet be perfect.
Because the measurement happens during transmission, the value cannot be included in that same transmission. This means that the value included in a measurement is the vcc under load during the previous transmission.
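For analysis it may therefore be useful to shift this value back by one measurement, so that it lines up with the transmission during which it was actually measured. A minimal sketch, assuming the measurements are loaded chronologically into a pandas DataFrame with a column named "vcc_load" (both the loading step and the column name are assumptions, not part of the actual data format):

```python
import pandas as pd

def align_vcc_under_load(df: pd.DataFrame) -> pd.DataFrame:
    """Attach each vcc-under-load value to the transmission it was
    measured during, instead of to the packet that reported it."""
    df = df.copy()
    # Packet N reports the vcc measured during transmission N-1, so shift
    # the column one row back (requires chronological order, one row per
    # received packet).
    df["vcc_load_aligned"] = df["vcc_load"].shift(-1)
    return df
```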
This change is not directly related to light sensor testing, but since this station transmits every minute, it will run out of battery sooner, which made it a good candidate for this additional experiment (both because an early battery warning would be useful here, and because battery depletion is expected sooner, which allows for a shorter testing cycle).
Accessing the data
Data is collected using the normal Meetjestad infrastructure, which was enhanced to allow sending arbitrary extra values to enable this experiment (and any other experiments that need it).
The calculated value of the Lux1 sensor is returned in the "lux" field. The "extra" field returns all data for both sensors; it contains a number of unlabeled values. In order, these are:
- The calculated lux value for Lux1 (in lux)
- The raw ADC value read for Lux1 (0-1023)
- The measurement range used for Lux1 (1-3)
- The calculated lux value for Lux2 (in lux, see the caveats about attenuation above)
- The raw ADC value read for Lux2 (0-1023)
- The measurement range used for Lux2 (1-3)
- The vcc value measured under load (in multiples of 10mV, see above)
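As an illustration, the list above could be unpacked into named values like this (a sketch; it assumes the "extra" values have already been parsed into a Python list in the documented order, and the names are chosen here for convenience only):

```python
def decode_extra(extra):
    """Unpack the list of "extra" values of one measurement into a dict."""
    (lux1, lux1_raw, lux1_range,
     lux2, lux2_raw, lux2_range, vcc_load) = extra
    return {
        "lux1": lux1,                  # calculated lux for Lux1 (lx)
        "lux1_raw": lux1_raw,          # raw ADC value for Lux1 (0-1023)
        "lux1_range": lux1_range,      # measurement range for Lux1 (1-3)
        "lux2": lux2,                  # calculated lux for Lux2 (lx, see caveats)
        "lux2_raw": lux2_raw,          # raw ADC value for Lux2 (0-1023)
        "lux2_range": lux2_range,      # measurement range for Lux2 (1-3)
        "vcc_load_mv": vcc_load * 10,  # vcc under load, converted to mV
    }
```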
Interpreting the data
To understand what the data values mean, it is relevant to understand how a measurement is performed. When the photodiode used as the sensor receives light, it starts conducting a current. Ideally, this current is predictable and linear with the light received (8.9nA per lux according to the datasheet). This current is then shunted through a resistor, which results in a voltage that is again linear with the current. This voltage is measured with the ADC inside the microcontroller, resulting in a value between 0 (0V) and 1023 (maximum voltage). The resistor value can be switched between 100kΩ and 3194Ω (100kΩ in parallel with 3.3kΩ). The maximum voltage can be switched between 1.1V (using an internal reference) and (nominally) 3.3V (using vcc as a reference).
To measure, three different measurement ranges can be used:
- Using a 100kΩ shunt resistor and a 1.1V ADC range. This allows measuring up to 1.1V / 100kΩ = 11μA (or 10.8nA per raw ADC count)
- Using a 3194Ω shunt resistor and a 1.1V ADC range. This allows measuring up to 1.1V / 3194Ω = 344μA (or 337nA per raw ADC count)
- Using a 3194Ω shunt resistor and a 3.3V ADC range. This allows measuring up to 3.3V / 3194Ω = 1033μA (or 1010nA per raw ADC count)
Range 1 is tried first and used if the raw ADC reading is less than 1000. If not, range 2 is tried and used if its reading is less than 1000. If not, range 3 is used.
In the packet, the number of the range that was used is transmitted, along with the raw ADC value for that range (the values for any previous ranges are discarded, since they will be >= 1000 anyway).
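In other words, the selection logic can be modeled like this (a Python model of the behaviour described above, not the actual firmware code; read_adc is a hypothetical function standing in for an ADC read with the given shunt resistor and reference voltage):

```python
# Each range is a (shunt resistance in Ω, reference voltage in V) pair,
# using the nominal values.
RANGES = {
    1: (100e3, 1.1),    # 100kΩ shunt, 1.1V internal reference
    2: (3194.0, 1.1),   # 100kΩ // 3.3kΩ shunt, 1.1V internal reference
    3: (3194.0, 3.3),   # 100kΩ // 3.3kΩ shunt, vcc (nominally 3.3V) reference
}

def measure(read_adc):
    """Try each range in turn; return (range number, raw ADC value)."""
    for range_nr in (1, 2):
        raw = read_adc(*RANGES[range_nr])
        if raw < 1000:
            return range_nr, raw
    return 3, read_adc(*RANGES[3])
```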
Above, nominal values are used. In practice, values will be slightly different. According to the 328p datasheet, the 1.1V reference is typically 1137mV (at 25°C and 3.3V Vcc). The vcc value might vary, but is measured by the station and sent along with the rest of the data. The resistor values have been measured at 98.2kΩ and 3267Ω for Lux1 and 98.3kΩ and 3275Ω for Lux2 (though in unspecified conditions). For detailed analysis, it might be useful to use these values instead. Even then, the values used will not be perfect, so there might be some additional noise in the transition area (e.g. when the light slowly increases and the measurement switches from range 1 to 2, it might make a small jump or even jump slightly backwards).
To analyze the data, it might be useful to (at least initially) ignore the actual scaling and simply look at the raw ADC value as a measure for the amount of light. Since different ranges have different scalings, this requires analyzing the different ranges in isolation.
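When a calibrated lux value is needed anyway, the conversion from raw ADC value and range number could look like this (a sketch using the nominal values; the measured resistor values and reference voltages mentioned above can be substituted via the keyword arguments):

```python
SENSITIVITY = 8.9e-9  # A/lx, raw-sensor sensitivity from the datasheet

def raw_to_lux(raw, range_nr, r_high=100e3, r_low=3194.0,
               v_int=1.1, vcc=3.3):
    """Convert a raw ADC value plus range number back to a lux value.
    Defaults are nominal; r_low is the parallel combination of the 100kΩ
    and 3.3kΩ resistors. No diffusor correction is applied."""
    r_shunt, v_ref = {
        1: (r_high, v_int),
        2: (r_low, v_int),
        3: (r_low, vcc),
    }[range_nr]
    voltage = raw / 1023 * v_ref   # ADC counts back to a voltage
    current = voltage / r_shunt    # voltage across the shunt to a current
    return current / SENSITIVITY   # current to lux
```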
Reference measurements
The data of the reference sensor can be viewed and downloaded through the website of the GFI. On the website, under "Historiske data", there are options for day search, period search and download. The location is "Florida", the measurement is "
Experiment log
2019-12-06: Hardware and software built, initial testing done.