I bought a Brix Refractometer to measure the amount of sugar over time, and found that I could do a single cooking temperature for one hour and get similar sugar levels to the previous recipe.
Also, I made corn milk with this recipe. It wasn’t great. Maybe some more experimentation would help. Weirdly, it smelled exactly like the oat milk.
| Ingredient | Amount |
|---|---|
| Rolled Oats | 80 g |
| Malted Barley | 8 g |
| Canola Oil | 22 g |
*Temperature is critical here; this recipe will be hard to reproduce without an immersion circulator. If the temperature goes much beyond 150F (65C) the enzymes will denature and stop converting the starches in the oats to sugars.
**Toasting time varies between oats. Preheat your oven to 250F (121C), spread some oats on a cookie sheet, and set a timer for 4 minutes. At 4 minutes, pull a few oats out to taste. The oats are done when they have a hint of roastiness with no bitter, burnt, or off flavors. If they aren’t done, give them another 4 minutes. Different brands of oats we’ve tested have needed between 4 and 12 minutes. Instant oats seem to need longer, while fancier non-instant oats need shorter times.
Sarah & I have been experimenting with Cyanotypes lately. We picked up a Photographer’s Formulary Cyanotype Kit. We also got a pretty large contact printing frame. I had built a really basic exposure unit a while back with some UV LED tape.
We figured out with test strips that to get a rich dark blue we needed an exposure time of about 75 minutes. That’s kind of a long time, though, so we wanted to try using the sun. Of course, the sun’s output is pretty variable: clouds might block it for a while, and its intensity changes significantly between summer and winter.
I had the idea that you could use a sensor to measure the UV light and count up how much UV had hit the cyanotype and have consistent exposures no matter how much the sun’s output changes.
A bit of Python and I had a prototype. Briefly, the sensor takes one reading per second, adds that value to a running tally, and sees how many more seconds it will take to hit the desired exposure.
Interestingly, the hardest part here was figuring out how to time one second. If you have a basic loop that works like this:
Get start time
Measure Light Intensity
Sleep until it’s one second later than the start time
You’ll discover there’s overhead in making the loop happen, in getting the start time, and even in the sleep itself, so my loops were actually taking longer than a second. (It seems like the right way to do this is with timer interrupts? I’m going to look into that in the second version of this prototype.)
So, to get my seconds closer to one second, every time the meter is used it saves the average time a loop took and shortens the sleep proportionally, so the actual loop time lands closer to 1 second. Based on some testing, I think the timing is now accurate to within ±0.0001 seconds.
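Here’s a minimal sketch of the integrating loop, with `read_uv` standing in for the real sensor driver (the function name and units are placeholders, not the actual code). Instead of correcting by the average loop duration, this version sleeps to an absolute deadline computed from the previous one, so per-loop overhead can’t accumulate:

```python
import time

def expose(read_uv, target_dose, tick=1.0, report=None):
    """Integrate sensor readings until a target UV dose is reached.

    read_uv: stand-in for the sensor driver; returns current UV intensity.
    With one reading every `tick` seconds, each loop adds intensity * tick
    to the running dose.
    """
    total = 0.0
    next_t = time.monotonic() + tick
    while total < target_dose:
        reading = read_uv()
        total += reading * tick
        if report and reading > 0:
            # Estimated seconds remaining at the current intensity.
            report((target_dose - total) / reading)
        # Sleep to an absolute deadline rather than a fixed interval:
        # any overhead in this loop just shortens the next sleep.
        time.sleep(max(0.0, next_t - time.monotonic()))
        next_t += tick
    return total
```

The key difference from a plain `sleep(1)` loop is that each deadline is `next_t += tick`, so a loop that runs 5 ms long sleeps 5 ms less the next time around instead of drifting.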
In any case, the light meter seems to work pretty well, though if the sun is at a particularly low angle, the timing seems to be off. I need to experiment more.
I shot a timelapse of Sarah binding her thesis. I used the timelapse system described in a previous post. I was left with about 12,000 frames, but only some of them had Sarah actually doing work in them.
I needed to build a timelapse of Sarah working. I tried several libraries that look for human shapes in photos, but they ended up being inaccurate, slow, or overly complicated.
I had tested out Imagga for an unrelated tagging project a few years back, and it looks like at some point in the interim they added some face detection tools. With a basic account you get 1000 API calls per month, and after some testing it looked like Imagga would be fast and easy to use.
This passes an image to Imagga, asks it to find a human, and then just counts the number of characters in the returned data. The response is pretty short if there are no humans, and longer if there is one.
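A sketch of that approach, assuming Imagga’s v2 face-detection endpoint and its usual Basic-auth scheme (the length threshold of 120 is an illustrative cutoff, not a measured one):

```python
import urllib.parse
import urllib.request

FACES_URL = "https://api.imagga.com/v2/faces/detections"

def imagga_response(image_url, basic_auth):
    """Ask Imagga about an image by URL and return the raw JSON text.
    basic_auth is the base64-encoded 'api_key:api_secret' pair from
    the Imagga dashboard."""
    query = urllib.parse.urlencode({"image_url": image_url})
    req = urllib.request.Request(
        f"{FACES_URL}?{query}",
        headers={"Authorization": f"Basic {basic_auth}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def has_person(response_text, threshold=120):
    """The length heuristic: a response with an empty detection list is
    short JSON, while face coordinates pad it well past the threshold."""
    return len(response_text) > threshold
```

It’s crude, but for sorting thousands of frames into “person” and “no person” piles, a character count beats parsing the JSON.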
I use cloud storage for my work-related photography. I make a folder of thumbnails of all the raw images from each shoot so that I don’t have to download gigs of raws to look back through the photos, just a few tens of megs of thumbnail jpegs.
This is by far the fastest way I’ve found to generate a set of thumbnails for raw photos. dcraw doesn’t actually decode the raw here; it just extracts the thumbnail that’s embedded in the raw file. ImageMagick’s mogrify then resizes the images, tweaks the brightness/contrast/color a bit, and saves them out.
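The two steps can be sketched as a small Python wrapper around the command-line tools. `dcraw -e` is the extract-embedded-thumbnail mode; the mogrify resize/auto-level/quality options here are illustrative stand-ins, not the exact tweaks from my setup:

```python
import subprocess
from pathlib import Path

def thumb_commands(raw_paths, max_size="1200x1200>", quality="85"):
    """Build the dcraw/mogrify command lines for a batch of raw files.

    dcraw -e extracts the JPEG preview embedded in each raw (fast, no
    demosaicing) and writes it next to the raw as NAME.thumb.jpg.
    mogrify then resizes (the '>' means shrink-only) and auto-levels
    all the extracted previews in one invocation.
    """
    cmds = [["dcraw", "-e", str(p)] for p in raw_paths]
    thumbs = [str(Path(p).with_suffix(".thumb.jpg")) for p in raw_paths]
    cmds.append(["mogrify", "-resize", max_size, "-auto-level",
                 "-quality", quality] + thumbs)
    return cmds

def make_thumbs(raw_dir, pattern="*.CR2"):
    """Run the pipeline over every raw matching pattern in raw_dir."""
    raws = sorted(Path(raw_dir).glob(pattern))
    for cmd in thumb_commands(raws):
        subprocess.run(cmd, check=True)
```

Batching every thumbnail into a single mogrify call is what keeps this fast; spawning ImageMagick once per frame would dominate the runtime.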
So, the v2 setup is better by far than the first setup. The enclosure is actually waterproof, the images are higher quality, and the system runs even if the power goes out.
v3 is a slight upgrade adding a high dynamic range light sensor to speed up figuring out proper exposure for the photos.
| Part | Price | Source |
|---|---|---|
| Raspberry Pi Zero Camera Cable | $2.99 | https://chicagodist.com/ |
| Raspberry Pi HQ Camera Wide Angle Lens | $30.00 | https://chicagodist.com/ |
| Raspberry Pi HQ Camera | $50.00 | https://chicagodist.com/ |
| Light Sensor – Adafruit TSL2591 | $6.95 | https://www.adafruit.com/ |
| Raspberry Pi Zero W | $10.00 | https://www.adafruit.com/ |
| Power Bank – RavPower 10000mAh 5V/3A* | $33.27 | https://www.ravpower.com/ |
| MicroSD – SanDisk 128GB Extreme | $23.99 | https://www.amazon.com/ |
| USB Wall Charger – Ailkin | $4.50 | https://www.amazon.com/ |
| 6″ USB A to Micro USB cable | $4.00 | https://www.amazon.com/ |
| 6″ USB A to USB C cable | $4.00 | https://www.amazon.com/ |
| Lens Filter – Marumi 46mm EXUS** | $34.68 | https://www.amazon.com/ |
| Project Case – Hammond RZ0269C | $17.30 | https://mouser.com/ |
*This power bank can do pass-through charging, so acts like a UPS for the Pi. In initial testing it seems to run the Pi and camera for around 24 hours.
**Good filters are really expensive. This filter seems to shed water and self-clean much better than others I’ve tried.
Annoyingly, the software has gotten complicated. It seems like you could just tell the camera to shoot a photo every 10 minutes and you’d be good to go, but the camera’s auto exposure only works during well-lit parts of the day. Once you hit dusk the photos are solid black. So, I had to write an auto-exposure system.
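The core idea of such an auto-exposure pass can be sketched as a reciprocal mapping from the TSL2591’s lux reading to a shutter time: halve the light, double the exposure. Every constant below, the calibration point included, is a hypothetical stand-in rather than a value from my setup:

```python
def shutter_us(lux, base_lux=400.0, base_shutter_us=10_000,
               min_us=100, max_us=200_000_000):
    """Map a lux reading to a shutter time in microseconds.

    Exposure scales inversely with scene brightness, anchored at one
    calibration point: at base_lux we want base_shutter_us. The result
    is clamped to the camera's usable shutter range.
    (All constants are illustrative, not measured values.)
    """
    if lux <= 0:
        return max_us  # effectively dark: use the longest allowed exposure
    us = int(base_shutter_us * base_lux / lux)
    return max(min_us, min(max_us, us))
```

The clamp at `max_us` is what handles dusk: instead of the camera’s auto exposure giving up and producing black frames, the system pins the shutter at its longest allowed value.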
Also, there are a couple of bugs in the newer Raspberry Pi HQ Camera software: long exposures take 4x the shutter speed to capture. I think I’ve found workarounds for most of the bugs, and with a non-obvious set of options you can get long exposures down to 2x the shutter speed.
Hopefully in the next few weeks I’ll have some long-term samples from the garden timelapse. Sarah & I moved in late December, so the final garden timelapse will run 05/19/2020 – 12/21/2020.