Channel: OpenEnergyMonitor aggregator

JeeLabs: Turning the page on 2015


As the last few days of 2015 pass, I’d like to reflect on the recent past but also look forward to things to come. For one, the JeeLabs weblog is now thriving again: the new weekly post-plus-articles format has turned out to suit me well. It keeps me going, it’s oodles of fun to do, and it avoids that previous trap of getting forced into too-frequent daily commitments.

Apart from a summer break, every week in 2015 has been an opportunity to explore and experiment with physical computing topics, several ARM µCs, and various software ideas.

Here are the last two articles for 2015:

I’d like to close off 2015 with a deeply worrisome but nevertheless hopeful note. While this is a totally technology-focused weblog, it has not escaped me that we live in very troubling times. Never in history have so many people been on the run, fleeing home and country for the most basic of all human needs: a safe place to live. We’ve all seen Aylan Kurdi’s fate:

Aylan Kurdi

An innocent three-year old boy, born in the wrong place at the wrong time, trying to escape from armed conflict. He could have been me, he could have been you. His tragic fate and that of many others could have been avoided. Europe offers a peaceful and prosperous home for half a billion people - accommodating one percent more is the least we can do.

I’m proud to see countries rise to the occasion, and put humanity and the planet first. Let’s cherish our compassion as well as our passion, our understanding as well as our creativity. For 2016, I wish you and yours a very open, respectful, and empathy-rich planet.


JeeLabs: Preparing for a new weblog


Now that the WordPress-powered JeeLabs weblog has been replaced by a static set of pages, we need a way to continue adding new weblog posts and articles. The current approach is clearly an unsustainable “hack”: there’s still a copy of WP running the original weblog on a local VM, from which a new snapshot is tediously being created once a week.

Luckily, there are many dozens of static website generators to choose from these days.

One of them is Hugo, an open source project on GitHub - it’s in active development:

Hugo logo

It’s written in Go and it will run on just about any platform as a stand-alone executable. Hugo is extremely fast (a few seconds to generate a 5000-page site), which is an important long-term consideration here (especially if the 1500+ older posts ever get imported), and it’s quite flexible in terms of generating (paginated) lists, archives, tags, categories, etc.

There’s a nice development mode, whereby you run Hugo in live server mode, at which point it’ll auto-refresh the browser from its in-memory cache whenever any file changes. Very effective during development of the layout, but also when writing articles of course.

The question is: how do you switch to a new site with minimal disruption?

One solution would be a full conversion: extract all the existing pages, convert them, and re-publish them as part of the new site. But there are some problems with that: for one, the conversion might be tricky - lots of pages would need to be checked to make sure everything came across as expected. This places a heavy burden on getting the conversion just right.

The second problem is that URLs need to be kept intact - existing links should continue to work. This is not trivial, since WP’s auto-generation of a “slug” for each page is not necessarily identical to how Hugo does things. Again, lots of checks would be needed to verify that everything went well, e.g. with accented characters and other non-ASCII-7 stuff.
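For illustration, a WP-style slug is roughly the post title lowercased, with runs of non-alphanumeric characters collapsed into dashes. A rough sketch (the `slugify` helper is made up for this example; WordPress’s real algorithm also transliterates accented characters, which is exactly where conversions tend to diverge):

```shell
# Rough sketch of slug generation - NOT WordPress's exact algorithm,
# which also transliterates accents and strips some punctuation first.
slugify() {
  echo "$1" |
    tr 'A-Z' 'a-z' |                   # lowercase everything
    sed -E -e 's/[^a-z0-9]+/-/g' \
           -e 's/^-+//' -e 's/-+$//'   # non-alphanumerics -> single dash
}

slugify "Turning the page on 2015"   # -> turning-the-page-on-2015
```

Anything beyond plain ASCII titles would need checking by hand against what WP actually generated.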

The alternative is to accept the old and leave it for what it is, gently “folding in” the new, like you would with whipped cream in chocolate mousse :)

Which is exactly what is going to be done, starting 2016: all the old pages, except the top-level “index.html” will be kept as is, with the new weblog carefully generating pages which do not conflict with any of the old pages and URLs. The old (“classic”?) site will continue to supply all the existing posts and articles from 2008 through 2015.

The new site layout has been designed from scratch, and was made to look very similar:

Weblog wide

The nice JeeLabs banner and 3-post front page layout have been retained, as you can see. And the site has also been made “responsive”, adapting to today’s smaller mobile screens:

Weblog mobile  Weblog menu

With a collapsing menu at the top, and the banner size reduced to keep the site content in plain sight even on small mobile phones. Look ma, no JavaScript: it’s all HTML5 + CSS3!

(ehm, to be precise: there’s still a little bit of inline JavaScript for the collapsing top menu)

The recent posts and articles lists have been moved to a new “Archive” page, making the front page even quicker to load than before. All site searches are now delegated to Google.

As for the old site: again, it’s all still there. The old front page is going to be renamed to “index-wp.html” - it’ll be frozen to the final WordPress state on December 31st, 2015.

Enjoy the new site. May it serve us all well for many years to come!

JeeLabs: My New Year's resolutions


It’s that time of year again - the last day of 2015. Another year gone by. A new one waiting in the wings. In an increasingly connected and technocratic world. Time for making plans.

Energy

For 2016, my resolution is to reduce our dependence on energy and natural resources further still. The “JeeLabs household” has been a net producer of electricity for the past two years, but there’s still progress to be made:

  1. Lower nightly average electricity consumption from 100W to 50W (including the 25W or so needed to keep the servers, LAN, WiFi, and internet connection going).

  2. Reduce gas consumption by 20%. The yearly consumption for heating and washing here is about 1600 m3. This is a fairly high value, due to the way the house is built - lots of open spaces, in a 1970-era stone building. Some options: find and fix the main heat leaks, and replace the 15-year-old gas heater with an even more efficient new one.

Identifying heat leaks in the house could perhaps be done with a few dozen well-calibrated temperature sensors, in combination with tracking cool-off and outside wind patterns.

Technology

These past months have been an ever-increasing “feeding frenzy” of buying stuff from eBay and AliExpress - gadgets, boards, gimmicks, chips, components - whatever. Costs are so low nowadays, that the temptation to “just order” one more thing became… irresistible?

This is insane. The “free delivery” offered with most of these purchases is not free at all of course. If nothing else, it’s hiding the slave labour and oil-guzzling transportation costs.

The pile of “stuff” here at JeeLabs is now so immense, that there are enough gadgets piled up in various little boxes for many years of experiments and exploration.

Enough is enough. In 2016, I will not buy anything with a computer in it (neither micro- nor otherwise). No chip, no laptop, no phone, no watch, no appliance. What I have is fine!

With two exceptions: if something essential breaks, I will replace it. And if it is unavoidable for finishing the development of a product for the JeeLabs shop, I’ll buy what’s needed.

Hardware

Several projects in 2015 led to a number of interesting results. The explorations with ARM µCs, and things like the ultra-ultra low-power Micro Power Snitch, really work and deserve to be turned into a product of some kind.

In 2016, I will create - and release - a number of complete hardware products, including a small successor for the JeeNode, most probably based on an STM32 µC and the wireless RFM69 radio module - although other options are not to be ruled out yet.

New, real, tangible, substantial, useful products, for the shop. You can count on it.

Software

But new hardware such as wireless sensor nodes is not actually that exciting. In some form or other, we already have lots of options for this - especially as makers with a soldering iron, able to combine existing solutions. Placing yet another chip on a PCB isn’t terribly exciting.

The really hard part is creating software for it all. Not just an implementation that works, but a code foundation which spans more than a single unit. I.e. making sense of an entire collection of nodes, able to combine and extend them, and able to manage the software for all of them, with tools to track revisions in the long term.

Many of the current JeeNodes here have been operating for years on end without a hitch, collecting sensor data and sending it along to some central node. But the more time passes, the harder it seems to become to keep track of the code on each of these nodes - let alone tweak them further. Cross-compilation has its limits, with source code getting lost, etc.

In 2016, I will design - and release - a working implementation of an infrastructure which interoperates with both existing and new nodes (i.e. both AVR and ARM), with ways to easily reconfigure everything, and with a very high-level approach to managing it all.

The choice of Arduino vs. ARM, of IDE vs make, of RFM69 vs WiFi, of language X vs language Y - these are no longer the proper questions to ask. A home environment - any environment over the span of a few years, really - is by necessity going to be heterogeneous.

Writing

Well, that’s easy: nothing changes (apart from the way these pages get created and served).

I will continue to write articles for inclusion in the Jee Book - which, BTW, is long overdue for an update with all the recent pieces. And as before, 10% of all book revenues will be donated to Wikipedia - a monument of collaboration and collective knowledge. As for the other 90%: all of that goes directly into the “JeeLabs Supplies Acquisition Fund”.

It’s great fun writing about all sorts of technology related to physical computing. The mix of high-level software, embedded software, microcontrollers, numerous sensors, electronics - both digital and analog(ue) - mechanical construction… it’s an unbounded playground!

It’s been over seven years of writing for the weblog so far. Let there be many more to come.

Nathan Chantrell.net: Orvibo S20 WiFi Mains Socket with Node-RED

orvibo_S20
Orvibo S20 WiFi Smart Socket

For years I used X10 for all my remote controlled sockets, but the unreliability eventually drove me to RF based sockets - the Home Easy ones in particular, as you can’t beat the price: often circa £20 for a pack of 3 with a remote control, and easily integrated into Node-RED and the like with a simple 433MHz transmitter and receiver. The downside is that there is no security.

Anyway, there are lots of affordable WiFi controlled sockets on the market now and a common one is the Orvibo S20, £15.99 on Amazon, a bit less from some sellers on ebay or under £11 from Banggood (be sure to select the right model for your country there).

They seem to be pretty well made and are small, unobtrusive units. They are designed to work only with their proprietary Android and iPhone apps, but there have been several pieces of work done on reverse engineering the protocol, and some libraries for various platforms already exist. It looked easy enough to knock something up in Node-RED, so that’s what I did, as it is the heart of my home automation system these days. This is just simple on/off control - there’s no point in messing with the built-in timers when we have better control than that in Node-RED itself.

Communicating with the socket

First of all, the socket needs to be configured to connect to your WiFi network. The easiest way to do this is to install the Orvibo WiWo app and set it up there - check that the socket actually works while you are at it, and make a note of the socket’s UID. There’s no need to use the app again after this; you can uninstall it if you like.

Communication with the socket is done via UDP broadcasts on port 10000. First of all, a packet needs to be sent to register with it. This registration times out after 5 minutes, so I’m sending it before every command, with a 300ms delay before the actual on/off command - less than this seemed a bit hit and miss, so you may need to tweak it.

For simple on/off usage like this the packet format can be simplified to:

Subscribe: 6864001e636c[mac address]202020202020[mac address in little endian]202020202020

On/Off: 686400176463[mac address]20202020202000000000[00 for off, 01 for on]
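For illustration, here is how those packets could be assembled in plain shell for a made-up MAC address of 0123456789ab (the broadcast address and the socat invocation at the end are assumptions, shown only as a comment):

```shell
MAC=0123456789ab                       # placeholder MAC address
PAD=202020202020                       # six 0x20 padding bytes
# reverse the byte order for the little-endian copy of the MAC
MACLE=$(echo "$MAC" | fold -w2 | tac | tr -d '\n')

SUBSCRIBE=6864001e636c${MAC}${PAD}${MACLE}${PAD}
ON=686400176463${MAC}${PAD}0000000001
OFF=686400176463${MAC}${PAD}0000000000

echo "$SUBSCRIBE"
# -> 6864001e636c0123456789ab202020202020ab8967452301202020202020

# To actually send (UDP broadcast to port 10000), something like:
#   echo -n "$ON" | xxd -r -p | socat - UDP-DATAGRAM:192.168.1.255:10000,broadcast
```

Note the length bytes in the header: 0x1e (30 bytes) for subscribe, 0x17 (23 bytes) for the on/off command.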

You can grab a copy of my Node-RED flow to do that here.

Node-RED Flow
Node-RED Flow

I am using MQTT topics of the format orvibo/0123456789ab, where 0123456789ab is the MAC address of the particular socket (grab it from your router - it is also available as the first 12 digits of the UID displayed in the WiWo app), and the payload is just on or off as required. In Node-RED I then remap more readable names like appliances/heater to the relevant socket topic, just as I do for the Home Easy and X10 modules.

Here is the code from the Node-RED function block:

// Get the MAC address from the topic and split it into an array of bytes
msg.topic = msg.topic.replace('orvibo/', '');
var mac = [];
for (var i = 0; i < msg.topic.length; i += 2) {
    mac.push("0x" + msg.topic.charAt(i) + msg.topic.charAt(i + 1));
}
var padding = [0x20, 0x20, 0x20, 0x20, 0x20, 0x20];
var uid = mac.concat(padding);
// note: reverse() mutates mac in place, so uid must be built first
var uidLE = mac.reverse().concat(padding);

// Build the subscribe packet
var command = [0x68, 0x64, 0x00, 0x1e, 0x63, 0x6c];
var subscribe = command.concat(uid).concat(uidLE);

// Build the on/off command packet
command = [0x68, 0x64, 0x00, 0x17, 0x64, 0x63];
var data;
if (msg.payload == "on") {
    data = [0x00, 0x00, 0x00, 0x00, 0x01];
} else if (msg.payload == "off") {
    data = [0x00, 0x00, 0x00, 0x00, 0x00];
} else {
    return null;  // ignore any other payload
}
var packet = command.concat(uid).concat(data);

msg = { payload: new Buffer(subscribe) };
var msg2 = { payload: new Buffer(packet) };

return [msg, msg2];

Anything else of interest?

As mentioned, communication with the Orvibo sockets is over UDP, so there is no guarantee that the packets arrive, but it has seemed reliable so far. It also looks like it is possible to query the status of the socket, so that is something to look at as well.

It listens on port 10000 for the UDP broadcast packets. I checked to see if there were any open TCP ports that might reveal something else, but there was nothing open. It does connect to a couple of IP addresses though: 52.28.25.255 (an Amazon AWS instance) on port 10000, which seems to be part of the “cloud” service that allows you to use it from outside your own network, as well as a (currently unreachable) Chinese IP, 61.164.36.105, on port 123, which is SNTP - so it could be part of the timer function, but who knows what it really is. I don’t like things on my network connecting to unknown Chinese IPs, and as I won’t be using the Orvibo app either, I’ve blocked both of these IPs on my router and will monitor to see if it tries any others.

 

FairTradeElectronics: Streaming Your Showbox Films To Your Xbox Console


The Xbox has become one of the state-of-the-art gaming consoles, used by many people. It has evolved from a basic 2D gaming system into a sophisticated, versatile gaming platform. However, Xbox owners also want entertainment beyond social media and gaming - and that is where the Android app ShowBox comes in.

The Android application is organised so that almost anyone can use it easily and navigate it quickly. In other words, you can move from one category to another within the app and stream programmes with a few easy clicks.

Notably, the app has a feature that lets users stream movies to almost any type of device, with the help of Chromecast or a third-party casting application.

Letting the Xbox console play a movie or show

Though many people know about the features of this app and its ability to stream to other devices, they often do not know how the streaming actually works. If you have no idea about this process either, follow the tutorial below to stream TV programmes to your Xbox console. First, download the ShowBox application to your mobile phone or smartphone.

  • Set up the streaming app
  • Search for the AllCast app in the Play Store and install it on your device
  • Open the ShowBox app and select a movie title to play
  • When prompted, choose the Internal Player option
  • Hit the Watch Now button and pick AllCast as the streaming method
  • The application will then look for accessible streaming devices
  • Choose your Xbox, tap on its name, and begin streaming the film or show

Those are all the steps needed to stream a programme from the app to a device, so anyone should easily be able to stream a movie from the Android app to their Xbox. The Xbox 360 or One is no longer a mere gaming console, but a great source of entertainment as well.

 

JeeLabs: Sh(r)edding those CDs & DVDs


If you’ve been around for a while, taking in the wonders of “personal computing” as it happened, then you’ll probably also have collected lots of disks with bits on them over the years. It started with floppies, from 8” to 3.5”, and then moved quickly to CD-ROMs and data DVDs.

Piles of them. Boxes and boxes. As part of some magazine, or even being a magazine.

Here at JeeLabs there were over 500 of them, and that was after a large cleaning session about a decade ago! Plus 200 more, with personal backups burned onto them. Innumerable copies of copies in some cases, but what’s the point of trying to sort it all out if you’re not going to need it (or even miss it) in the long run?

One way to deal with this - in a very packrat-like “never throw anything away!” fashion - is to copy all those disks to a hard disk as “.iso” files. Which can then easily be mounted when needed, or used to re-create the original CD or DVD, even.

As it so happens, CD/DVD drives are becoming rare at JeeLabs, though.

But not to worry, this very first Mac Mini from 2005 was still sitting around, an old PowerPC G4 with 512 MB RAM and an 80 GB HD - unused, collected from someone who has moved on since:

A perfect CD/DVD ripper. It now runs the latest Debian 8.2 (for 32-bit PPC) and is in fact quite usable in today’s terms, with WiFi and wired ethernet both working fine - great for SSH use!

The reason for doing this is ddrescue, a wonderful tool which reads in CDs and DVDs while trying very hard to keep going despite any disk I/O errors. At the end, a log file can be saved, which describes exactly which parts are fine and which failed. Command-line use is trivial:

ddrescue -b2048 /dev/cdrom d333.iso d333.log

Here is an example log file with some read errors:

# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue -b2048 /dev/cdrom d333.iso d333.log
# Start time:   2016-01-05 01:07:28
# Current time: 2016-01-05 01:17:15
# Scraping failed blocks... (forwards)
# current_pos  current_status
0x206D4800     /
#      pos        size  status
0x00000000  0x1FDA0000  +
0x1FDA0000  0x00934800  -
0x206D4800  0x00CC8000  /
0x2139C800  0x00000800  -
0x2139D000  0x05A3D000  +
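In these map files, ‘+’ marks an area that was rescued, ‘-’ one that failed to read, and ‘/’ one still being scraped. As a small illustration (plain shell, assuming the three-column format shown above; the bad_bytes helper is made up for this sketch), the failed bytes can be tallied from the ‘-’ lines:

```shell
# Tally the unreadable bytes in a ddrescue logfile: data lines are
# "pos size status", where a '-' status marks an area that failed to read.
bad_bytes() {
  total=0
  while read -r pos size status; do
    case "$status" in
      -) total=$(( total + size )) ;;   # shell arithmetic accepts 0x... hex
    esac
  done < "$1"
  echo "$total"
}

# bad_bytes d333.log
```

For the example log above, that yields 9654272 unreadable bytes (0x934800 + 0x800).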

This first Mac Mini has only 2 USB ports (which were relatively new in 2005), but it does include a FireWire-400 port, and even a modem! - BBS anyone? - hmm, too retro, let’s not go there…

The rest is pretty straightforward: add 2 large USB drives, one as main work disk, the other as backup (to be stored off-line elsewhere). And then just rip away - it takes about 4 minutes to read a CD-ROM and 15 minutes to read a single-layer DVD.

Unless there are read errors, that is. Then, reading an entire disk can easily take up to an hour (or even longer with DVDs), as the drive retries and then ddrescue itself also retries. It’s quite clever, reading from the other direction (i.e. a different head motion) and reading increasingly smaller pieces to attempt to salvage as much as possible. And it can be interrupted / restarted.

Hopefully, this total re-rip of all the disks to a modern 2.5” 500G HD will last a long time. The backup HD is an old noisy 3.5” 500G HD, but that’s ok, it’s only meant to be used as last resort.

About 5% of the CD-ROMs had read errors. Many of them are around 10 years old. The lesson? Never use those gold-with-blue-green CD-ROMs again! Around 8 of 10 failed disks were of that kind. And when they failed, there were non-recoverable read errors in many places on the disk!

All the private backup disks will be cut in two, with each half disposed of as house waste at different times (CD/DVD garbage is not collected separately in the Netherlands, alas).

And that’s basically it. With 80% of those magazine CDs discarded right away, the remaining CDs and DVDs have now almost completely been transferred, and are ready for disposal. The result: about 450 GB of data, served via Samba on the network, with a backup disk kept off-site in case this server disk ever starts behaving badly (as all disks do, eventually). Goodbye CDs!

JeeLabs: Getting (a bit more) organised

FairTradeElectronics: Rice Cooker – Best Solution For Cooking Rice Perfectly


With today’s busy schedules, many people hardly get enough time to cook. If you are one of them, you should consider purchasing a few simple kitchen appliances, such as a microwave or a rice cooker. Modern rice cookers have proved to be a boon for working people who do not get enough time to cook different rice recipes for their family.

Rice cookers have a pan or pot attached to them, in which the rice is cooked. Generally, this pot is made of plastic, glass or ceramic. If you are planning to purchase a rice cooker but are not able to select the best one, you can read reliable rice cooker reviews. These reviews are generally written by experts, so you can rely on them when making a purchase.

The rice cooker is quite a small electric appliance and hence can easily be placed anywhere in the kitchen. However, for easy operation, make sure you place it near a power outlet.

The best part of rice cookers is that they are safe to use and come loaded with a number of features, like a timer and a steaming option, which make cooking easy and enjoyable. They come with attached carrying handles, which do not get hot while the rice is cooking.

Know what to consider while purchasing the rice cooker

  • The first and foremost thing to consider is the material of the pot, lid, etc., so that you purchase a durable, well-made product.
  • Different rice cookers operate differently, as they come loaded with different features. Pick one that offers easy usage, so you do not have to think twice before using the appliance.
  • The technology used to cook the rice is another thing to consider. If you want more advanced features, opt for a fully automated rice cooker or one that can keep the rice warm for a longer period of time.
  • Know the volume of rice that can be cooked in one go. The capacity of a rice cooker can be stated in two ways: before cooking and after cooking.
  • Since the rice cooker works on electricity, also enquire about its power consumption. It is advisable to purchase one that consumes less wattage, to cut down your electricity bill.

Apart from its advantages, the rice cooker also has certain drawbacks, some of which are listed below:

  • Many rice cookers do not cook brown rice evenly. In addition, the bottom layer of rice often turns out gluey, spoiling the taste.
  • Cleaning the rice cooker is another hard task: besides the cooking pot, you also have to clean the steaming vents, the outer surface and the inner lid.

 


JeeLabs: Squashing tons of source code


And then came the internet, open source software, and public code repositories…

Never before has it been possible to access so much software, of all kinds, in any programming language, for any application, and of all qualities & complexities. Even if not of immediate use “as is”, source code is a phenomenal resource - for knowledge, development ideas, engineering tricks, clever hacks - or simply to learn what others have done and how they have done it.

At JeeLabs, there’s an archive of source code - both “unzipped” and “checked out” - which has grown to some 125 GB over the years. Over half a million files.

Much of it is unknown, will never be seen, and is perhaps even obsolete.

But that’s not the point. Having access to the code, to find things, to see what’s available and to learn from others where possible - it’s a truly fantastic resource. And it’s definitely not the same as going through everything online: local disk access is faster, you can access it via the editor and other tools you know, and hey… it’s even there when the internet link is having a bad day.

Having a large source code repository (or actually a huge set of them) at arm’s length can be very useful during development. But it’s also easy to create a big mess of it. Which version is which, where did it come from? Before you know it, you can drown in semi-identical copies…

Fortunately, most source code now lives in public repositories (git, svn, mercurial, bazaar, cvs, rcs, etc). Which means you can simply “clone” from the source, and you get as much history as you like with it, as well as README’s, docs, links to the original “repo”, and more. Extremely convenient with Git & GitHub. That’s exactly what’s being saved more and more at JeeLabs.

As mentioned before, this is really a read-only archive - for reference, browsing, searching, sometimes for re-use, and occasionally even as basis for modification or derived work.

Storing these files as is on a local disk can be a bit inconvenient:

  • it eats up space (most files are tiny, with a lot of partial-block waste)
  • it’s too easy to accidentally change things
  • it’s hard to refer to, if the archive changes regularly

For this reason, the current collection of snapshots over the past 10 years has now been turned into a highly-compressed read-only archive. To be extended once a year, perhaps.

There are many ways to do this (one way was to burn file collections to iso’s, i.e. CD-ROM images, even if not physically stored that way). Which is in fact exactly how the source code collection has been managed here, until now. But it gets messy, and worst of all: there is a huge amount of duplication, as source gets checked out again, perhaps with some changes.

But there’s a solution, which looks like it could work quite well, called SquashFS. It’s a file system with a number of very useful properties in this context:

  • the file system is created once and is then essentially read-only
  • each file is compressed, but so is most of the file system meta information
  • duplicate files are identified and only included once (“poor man’s de-duplication”)
  • no free space is wasted due to “blocking” with some fixed granularity

With lots of small source files and some occasional duplication across them, SquashFS achieved a remarkable compression: those 125 GB ended up as a single 32 GB disk file. Note that this is a lossless transformation: the exact same directory tree ends up inside SquashFS.

SquashFS requires Linux (sort of, see below). But that’s no big deal: we can simply copy that sources.sqsh archive file to a Linux setup, which could be a Raspberry Pi or an Odroid board, and it’ll do the work for us. The result can then be shared as a Samba file server volume.

Here’s how to “mount” that file in Linux on an (existing) directory, called /mnt/sources:

sudo mount /sources.sqsh /mnt/sources -t squashfs -o loop

Or, to make this happen automatically on reboot, we can add this line to /etc/fstab:

/sources.sqsh   /mnt/sources   squashfs   ro,defaults   0 0

It’s extremely simple to create such a SquashFS archive:

mksquashfs /path/to/original/sources sources.sqsh

Compression of a large set of files requires a lot of processing power (in this case an 8-core i7 running full blast for several minutes). But it’s no big deal since: 1) it only needs to be done once, and 2) you don’t have to compress on the same machine as where the result will be mounted. There’s even a build of the SquashFS tools for Mac OSX (via brew install squashfs).

See the SquashFS HOWTO for further information and examples.

JeeLabs: Keeping track of lab supplies


There’s this nasty thing with electronics: There Are So Many Kinds Of Tiny Little Components!

If you’re into hardware and building circuits, you’ll quickly find out that not only are there so many different resistors, capacitors, semiconductors, chips, sensors, LEDs, actuators, fasteners, wires, and so on… some of those SMD parts are also minuscule! - way too easy to lose track of.

At JeeLabs, there are now about a hundred cardboard boxes filled with little plastic envelopes, mostly from DigiKey, but more and more also ordered directly from China:

Over 2,000 different components. Enough to go insane when trying to find something again!

As always, there are many ways to address this problem. There are online sites which let you manage a parts inventory, and no doubt at least as many specialised applications for that task.

At JeeLabs, for many years now, a custom-made “ShopBase” has been in use. Implemented with FileMaker Pro, a commercial but very convenient and powerful database & front-end. But powerful as it may be, it was a drag to start up each time to find one specific component.

Online solutions have the same drawback, and besides… what’s the point of placing that information in the cloud when you only need it while standing next to those lab supplies?

There’s a much simpler solution. It’s extremely fast, trivial to build, and easily adapted and extended for any particular context: text files with a Unix shell script, using standard Unix tools. All running on Mac OSX in this case, but it would work equally well on Linux, and probably also on Windows with Mingw tools, etc. It’s called “labby”, because it’s a bit like a loyal Labrador?

$ labby 
JeeLabs Lab Supplies
Usage:
  labby <text> - find the specified search text (case insensitive)
  labby w <id> - open web page for specified product id
  labby a <box> <id> - add information about an item to the specified box

Let’s say we need to use a zener diode - which ones do we have lying around?

$ labby zener
c12     DK-568-5804-1-ND  DIODE ZENER 47V 500MW DO-35    20-07-2011
c13     DK-1N5229BFSCT-ND  DIODE ZENER 4.3V 500MW DO-35    03/11/2011
c13     DK-1N5227B-TPCT-ND  DIODE ZENER 500MW 3.6V DO35    03/11/2011
c13     DK-1N5239BFSCT-ND  DIODE ZENER 9.1V 500MW DO-35    03/11/2011
c13     DK-1N5242BFSCT-ND  DIODE ZENER 12V 500MW DO-35    03/11/2011
$

That took 10 milliseconds. No application launch, no internet access. There seem to be five different versions, one in box “c12”, the rest in box “c13”. All the parts are from DigiKey, and there’s the part number in there.

Note that locations are very coarse, it only mentions the box, not where inside the box the component can be found. That’s intentional. It’s no big deal rummaging through one little box to find the item, and this way we don’t need to track and type in any extra details. It also means we can put the item (i.e. the bag) back after use, and simply stick it in the front. Each box thus ends up ordered in LRU order. No muss, no fuss.

What if we need to find out more about one of those parts? Easy:

$ labby w DK-1N5229BFSCT-ND

And up pops a web browser window, with the full details:

Here’s how it’s done: there’s a directory of text files, one per storage box, each containing lines of tab-separated text: product-id, description, notes, date-added. The workhorse for searching is grep - it searches all these files and reports the matches, including the file name.

Everything is implemented as a simple shell script (did you know bash supports functions?):

#!/bin/bash
cd ~/Documents/JeeLabs/Supplies

usage() {
cat <<'EOF'
JeeLabs Lab Supplies
Usage:
  labby <text> - find the specified search text (case insensitive)
  labby w <id> - open web page for specified product id
  labby a <box> <id> - add information about an item to the specified box
EOF
}

openPage() {
  case "$1" in
    DK-*) s="http://www.digikey.nl/product-search/en?keywords=${1#DK-}" ;;
    EB-*) s="http://www.ebay.com/itm/$1" ;;
    FL-*) s="http://nl.farnell.com/jsp/search/productdetail.jsp?SKU=${1#FL-}" ;;
    MR-*) s="http://nl.mouser.com/ProductDetail/${1#MR-}" ;;
    # use DigiKey if there is no supplier prefix
    *)    s="http://www.digikey.nl/product-search/en?keywords=$1" ;;
  esac
  open "$s"  # opens a web browser window on Mac OSX, since $s is a URL
}

addItem() {
  case "$1" in
    "") echo "Usage: labby a <box> <supplier-partid> ?<optional-notes>?"
        return ;;
  esac
  if [ ! -f "data/$1" ]; then
    echo "'$1' is not an existing box, you need to create it first as 'data/$1'"
    return
  fi
  case "$2" in
    DK-*) ;;
    *) echo "Please specify a supplier and part id, e.g. 'DK-1N5239BFSCT-ND'"
       return ;;
  esac
  echo "whoops, this feature is not working yet..."
}

case "$1" in
  "") usage ;;
  w)  openPage "$2" ;;
  a)  addItem "$2" "$3" ;;
  *)  cd ./data; grep -i "$*" * | sed -e "s/	/  /g" -e "s/:/	/" ;;
esac

This first version was whipped up in an hour, and as you can see, it’s not complete. The most important missing feature is the ability to add more items. What we’d like is to supply only the box name (i.e. location) and the product id (including supplier prefix) - then this script should perform a web request to the supplier’s site to grab the description with curl or wget, and then massage the result a bit to append as one text line to the proper box file.

For the time being, new items will have to be added manually, using a text editor.

But hey: Instant Search! - and perhaps just as important: instant tweaking as the need arises.

Sure - this could all have been implemented with a web server, MySQL or SQLite, and some programming language to tie it all together and render the results. But this is about finding information quickly, and coming up with a convenient workflow. And there - for several decades now - The Unix Way is still hard to beat. It’s a wonderfully malleable toolbox.

JeeLabs: Arduino shields... or not


Once upon a time, when the Arduino was still young, someone made a mistake with the headers on its PCB, placing one of the headers off the standard 0.1” placement grid - as used by just about everything else in the electronics prototyping world.

The result is history: Arduino shields always have one header on a 0.06” offset, instead of 0.1”. The big irony is that this mistake now benefits the Arduino world, since it uniquely differentiates it from everything else.

But for prototyping, it’s still a huge mistake and nuisance. I’m tired of always having to use special prototyping boards for anything based on this mistake. So I’ve decided I’ve had enough of this nonsense, at least for my own personal use:

The two articles this week are about fixing the 0.06” header problem once and for all:

The design files will be available online, for anyone who wants to adopt this same approach.

JeeLabs: A base board for the Hy-Tiny


There are many Arduino “shields” out there, i.e. boards which fit on top of the Arduino Uno and compatibles. Even for one-off prototypes, it may make sense to use this form-factor, as it allows you to re-use a setup with a different µC board underneath (but watch out for 3.3V/5V issues!).

This could be convenient with STM’s low-cost “Nucleo” boards - letting you switch to a different µC if the current one runs out of steam, or doesn’t have all the needed peripherals after all. Or switch to a different line of µCs altogether. Arduino shields have become a de facto standard.

Here is a “base board” for the Hy-Tiny, as described in a recent article and as used before:

It brings out all the pins in the proper place, i.e. to support serial I/O, I2C, and SPI on the same pins as an Arduino Uno. There are even three “spare” pins, available on an extra top-left header.

The Hy-Tiny base board is 50 x 50 mm, just right for production through one of the low-cost PCB services in China. The design files can be downloaded via this link to GitHub.

As you can see, all headers are doubled-up on the inside, without that 0.06” alignment error of standard Arduino shields. This board can be used with run-of-the-mill prototype boards with a “sea of holes” on a 0.1” grid. Such as this one, which is 5 x 7 cm and has 24 x 18 solder islands:

The idea with the above Hy-Tiny base board therefore, is to select the header rows needed for a particular project and only solder those in. This approach is not new: several existing boards (e.g. the Olimexino-STM32) offer exactly the same solution for that off-grid alignment issue.

Coming up next: even more flexibility!

JeeLabs: Side-stepping the 0.06" mistake


As mentioned before, the one thing standing between an Arduino and convenient prototyping is that horrid 0.06” mis-alignment of one of its headers.

No longer - meet this almost-too-trivial-for-an-article “Bridge Fifty-Fifty” (BFF) adapter:

Bridge, because that’s all it does - and Fifty-Fifty, because that is its size in millimeters.

This is a simple pass-through board to convert in either direction between the Arduino shield header layout and a layout suitable for numerous prototyping boards with a “sea of holes” on a 0.1” grid, such as the one shown above, which is 5 x 7 cm with 18 x 24 solderable / through-hole / copper-clad pads on it.

The BFF can be used either between an Arduino Uno (or compatible, such as STM’s Nucleo boards) and a prototype board, or between the Hy-Tiny base board from the previous article and any 3.3V-capable Arduino shield.

With care, a BFF could even be soldered directly to a 5x7 proto board, for a very low profile.

Or you could make your own µC board, perhaps a one-off design, put some headers on it using the standard 0.1” grid, and use this BFF board to allow placing an Arduino shield on top.

The EAGLE design files can be downloaded via this link to the “Embello” repository on GitHub.

JeeLabs: Overcoming JET lag


Long-time readers of this weblog know that the topics here have always been all over the map - electronics, digital design, embedded firmware, but also trying out new stuff, getting organised, re-thinking software development, and more. This week is no exception…

I’d like to revisit the design of a long-term home monitoring and automation system for use here at JeeLabs. If you’ve been keeping count, you could call it “HouseMon IV”, but I’m going to stick to the name “JET” for this project.

Here are this week’s articles, as always presented in a few day-sized chunks:

And since a picture is often worth more than a thousand words, here is one:

This diagram was made in early 2015 for the initial README page of JET, and the basic design of the system now being presented really hasn’t changed that much.

JeeLabs: JET, as seen from 9,144 meters


The JET project name is an acronym for “JeeLabs Embello Toolkit”. And with that out of the way, please forget it again and never look back. A few more acronyms will be explained when the subsystems are introduced - none of them particularly fancy or catchy - it’s just that we can’t move forward without naming stuff. Names and terms will be introduced in bold.

As will become clear, this is a long-term project, but not necessarily a big one: this is not about creating a huge system. It’s more about making it last…

This has implications for the choices made, but even more for the choices left out, i.e. the options being kept open for future extension.

First off: JET is neither about picking a single favourite language or tool and imposing it on everything, nor about limiting the platform on which it can be used. Any such choices made today would look stale a few years from now.

But evidently we do need to pick something to work with. These choices will be made based on requirements as well as current availability and stability. With luck, we can avoid having to combine a dozen different languages & tools. But choices made for one part of the system won’t impose restrictions on the rest. How this is possible will become clear in the upcoming articles.

So what is JET?

  • JET is an architecture, in the sense that it covers a whole set of systems: a central machine, all sorts of remote “nodes” for sensing the state of the home, environmental data, nodes which control lights, appliances, curtains, etc, and a variety of controls and displays, from switches to mobile screens

  • JET is like a city - pieces come, pieces go, everything evolves as needed, and most importantly: the design embraces variation - old and new - it’s all about being able to use a heterogenous mix of technology, because over the years hardware and software is bound to change, again and again

  • JET is a conceptual framework, allowing us to implement a real working setup, without having to constantly revisit the choices made so far - there needs to be some level of consistency for different parts to be designed and built over time, in such a way that everything continues to inter-operate nicely

Some technical design requirements which will greatly affect the choices made:

  • the core of JET is always on: it needs to run 24 hours a day, 7 days a week
  • the (realistically) expected lifetime of this core must be at least 10 years
  • it has to run on low-cost commodity hardware, e.g. Raspberry Pi, Odroid, etc.
  • JET must be extensible enough to connect with just about anything out there
  • there has to be a good way to evolve or migrate from one setup to the next
  • all admin tasks must go through authenticated access to the central system

Here are a few more requirements, some of which are perhaps less conventional:

  • the system must be self-diagnosing and self-healing, wherever possible
  • it must be possible to run in unattended mode without any user interface
  • web access is optional, with possibly more than one server setup in parallel
  • support for reduced-functionality mode when the central machine is down

One could almost compare this to designing for a car or a comms exchange…

Or to put it in a somewhat different perspective: JET is for real use on Earth, by real humans, but its core has to be written as if it will run on Mars, in a very unforgiving environment!


JeeLabs: Ongoing software development


Ok, with those (somewhat vaguely-defined) requirements down, how do we even start to move in this direction?

Well, we’re going to have to make a number of decisions. We will need to build at least one program / app, of course, and we’re going to need to run it on some hardware. In a product setting (which this is not!), the steps would involve one or more people planning, designing, implementing, and testing the product-to-be. Iteration, waterfalls, agility, that sort of thing.

JET is different. It’s almost more about the process itself than about the resulting system, because 1) the specs are going to be really fluid (we’ll get new ideas as the project grows and evolves), 2) the timeline is wide open (we’re trying to accomplish something, but the moment we do, we’ll come up with more things to do/add/try/fix/replace), and 3) there is no hard deadline.

Different as it may be, there is a good reason for doing things this way: let’s not constrain what is possible, and leave the door open for unpredictable new ideas, devices, and demands. A home should be a fluid and organic living space. The last thing we want, is to turn our home into an industrial design task. JET will be much better off in permanent research & exploration mode.

And then there’s history: are you going to throw out everything you have and replace it with new technology, just because you could? Is everyone going to dump their fridge because there is now a shiny new “IoT model” for sale? (apart from even wanting such a thing in the first place…)

We need a different model. Some people like to mess with their houses, and are always tinkering with it (even if perhaps not a perfect choice for every spouse…). We need a setup which works yet can evolve while being in use. We don’t want to distinguish production from dev-mode.

If you’re familiar with the Erlang programming language, then you’ll know how some aspects of its design makes it eminently suitable for such a task: in Erlang, every piece of code can be replaced without restarting the system (compare that to going through a “minor” Windows update!). Erlang was designed at Ericsson for its telephone exchanges, i.e. always-on hardware.

But we don’t have to adopt Erlang, a functional programming language which can be quite a challenge to learn (and perhaps also a bit heavy for a Raspberry Pi). What we can do, is to design a minimal core process, in such a way that it doesn’t need to be stopped and restarted during development.

There’s a little chicken-and-egg issue here, since obviously we will have to build, and test, and restart that core process first. But the trick is to make the core agnostic about our particular application domain: if the core contains only general-purpose code which has next to nothing to do with JET’s home monitoring and automation, then it also won’t need to change as we start augmenting it with features.

Let’s try and work this out in some more detail:

  • JET/Hub is the main core process to rule all the others
  • it is launched once and then never ever quits or runs out of memory
  • it launches everything else as one or more child processes
  • it supervises these processes, taking action when they exit or fail
  • it provides some communication mechanism(s) between itself and the rest
  • it may contain some more key (but generic) functionality, such as logging
  • it may include a (generic and robust!) database, just to simplify the system
  • it may also include a (generic and robust!) web server, again to simplify

With JET/Hub running, even before we have any monitoring or automation code, we can then start to design and implement one or more child processes, which will be called JET Packs. The implementation language for these packs need not be the same as the hub’s; all that matters is that starting and stopping adhere to a few conventions, and that all communication with the hub is well defined and fully compatible across every pack that will ever be created.

(This diagram is slightly dated, but still matches most of the current design)

So how will the hub be implemented then? Answer #1: you couldn’t care less. Answer #2: using a language which is very easy to port and install, and which is robust and able to handle messaging extremely well. Answer #3: using the programming language Go, with MQTT for “pub-sub” messaging, Bolt as key-value store database, and Go’s standard net/http package as web server.

JET/Hub has been implemented in an hour, and can be found in the dev branch on GitHub. Well… that’s a bit of an exaggeration: this is mostly a prototype for the actual code. Things haven’t been tied together yet, making the code so far next to useless. We will need a way to make it all work together and there’s no supervisor logic to manage the JET packs yet. But it’s safe to say that the entire hub can probably be built with under 1,000 lines of custom Go code. Which is not too bad for such a key architectural component of JET!

With the hub running (forever, on Mars, remember?), we still need a way to set up packs, add them to the mix, replace them, and make them do interesting stuff. But apart from these basic requests (over MQTT), there is nothing else we need in the hub to be able to start thinking about the real task at hand: designing and implementing features as a JET pack, using some language.

Some interesting properties emerge out of this approach:

  • we can run packs on our own local machine during development (even if that means the hub can’t supervise and launch them for us) - as far as the actual operation is concerned, there will be no difference and all messaging will take place via socket-based MQTT, the same as other packs

  • there is no preferred programming language - the only thing which matters is the protocol and semantics of all the message exchanges over MQTT

  • authentication can be enforced via MQTT / HTTPS sessions and SSL certificates

  • a different MQTT implementation can be used, if the hub’s one is not desired

  • other services from the hub, i.e. web servers and database storage also need not be used, if not needed or if alternatives are preferred: it’s all optional

So in a way there is no core - there are only conventions: MQTT and choices for its topic names and message formats. If Go or any of the packages used in the hub were to vanish or stop being supported, we can look for alternatives as needed and replace the entire hub with a different implementation. This would not affect the overall architecture of JET.

We have now defined a central infrastructure, yet we haven’t really made any limiting choices. Which was exactly the purpose of this whole exercise…

JeeLabs: Architecture: it's all about data!


Code vs. data… there are many ways to go into this oh so important dichotomy.

Here is a famous quote from a famous book, now over four decades old (as you can tell from its somewhat different terminology, i.e. flowcharts <=> code, tables <=> data):

Show me your flowcharts, and conceal your tables, and I shall continue to be mystified; show me your tables and I won’t usually need your flowcharts: they’ll be obvious. – Fred Brooks, “The Mythical Man-Month”

Data really tends to be the most important aspect of a long-term design process, because:

  • code matters while our program is executing, data is what stays around when it is not
  • code is what we invent and produce to deal with a task, data is what comes in as facts
  • code evolves as we better understand a task, data needs to be saved and kept intact

Very often, software development is like a constant shuffle: we write code, we run it, it collects, generates, and transforms some data, we save that data to disk, we stop the code and replace it with an improved version, and then the whole process starts over again. We’re continuously alternating between run mode (with the code frozen) and stop mode (with the data frozen):

There are clearly exceptions to this view of the software development process: when we store HTML pages as data, it really is part of the software design, in the same way that our code is.

But the model breaks down with JET, which needs to be running 24 hours a day, 7 days a week. As far as the hub is concerned, there is no stop mode. We don’t want to lose incoming data.

This means the design of the central data structures and formats must be frozen from day one. Of course we’ll need to be able to add, change, and remove any data flowing through the system, but its shape and semantics should be fixed, as far as the logic and code in the hub is concerned.

This is not as hard as it may seem. The hub is a switchboard. There is very little data which it needs to understand. If it can collect data, pass it around, and save it, it won’t care what that data is. And that’s where MQTT’s “pub/sub” and Bolt’s “key/value” concepts make things easy:

  • there are topics (a plain text string, with slashes and some minor conventions)
  • these topics determine the routing of incoming and outgoing messages
  • and there are values (message “payloads” in MQTT terminology)
  • for MQTT, the mechanism is called publish-subscribe, or “pub/sub” in short
  • for Bolt, the topic is the (hierarchical) key under which a value is stored on disk
  • the values can be anything and can often be treated as an opaque collection of bytes

The only exceptions are the messages which control the behaviour and operation of the hub itself. These need to be specified early on, and frozen - hopefully in such a way that all further changes can remain 100% backwards-compatible. Again, this is not necessarily a very hard requirement to meet: if we start off with a truly minimally-viable set of special hub messages, then every subsequent change can be about adding new message conventions for the hub.

Adding message types, formats, rules, and semantics to a running system is far less intrusive than changing what is already in use. Even if the first hub can only pass messages as-is through MQTT and not save them in Bolt, quite a few features in JET can be tried and built already. As we figure out the best messaging design for this, we can start by implementing this in a separate JET Pack before messing with the hub. This can be done on our development machine, as a pack which includes Bolt and connects to the rest of the system like any other pack: over MQTT.

With respect to data formats, one more design decision will be imposed: the values / payloads which need to be processed and understood by the hub will use JSON formatting. It may end up getting used in lots of places, but that’s not a hard requirement as far as the hub is concerned.

Messaging is the heart of JET (i.e. data) - not logic or processing (code) !

What about sensors, actuators, and tying into the physical world? - Same story, really: we can implement it first as a separate pack, and then choose to move that functionality into the hub, if it works well, is super-robust, and if it simplifies the flow and structure of the entire setup.

What about the front-end then, the web server which lets us see what’s going on in our house, control appliances, and define automation rules? - Again: we can start with a separate pack.

You might recognise some concepts from an old project at JeeLabs, called JeeBus - and many of the design aspects of JET are indeed similar, even if based on different technology which didn’t even exist at the time. It’ll be interesting to see how this approach plays out this time around.

As an architecture, JET embraces decoupled development, because this will allow the city-like properties mentioned in the initial requirement specs. If JET is about evolving software over a long time span, then it has to be able to evolve from a tiny nucleus (the hub) right from the start.

In a nutshell: JET/Hub is the place where data thrives - JET Packs are where code thrives.

JeeLabs: What's in a hub?


The restart of the JET project is progressing nicely. This week’s episode is about installing a first version as a basic-yet-functional new core system and describing / documenting some of the central features being built into the hub.

So here goes, one story per day, about what JET/Hub is all about:

Yes, that’s a lot of articles for one week. Because there’s a lot to describe!

Here’s another little diagram, this one is from 2014 - even older than last week’s:

And yet it’s still mostly applicable to JET …

JeeLabs: An introduction to JET/Hub


This is the start of a series describing “JET v4”, and in particular the “hub” subsystem. JET is a system which is intended to bring together all the home monitoring and automation tasks here at JeeLabs, for the next 10 years…

As mentioned before - and as its name indicates - “JET/Hub” is the centrepiece of this architecture. It’s the switchboard which sits between the various (and changing) components of the system: on one side serial ports, bridges to Wireless Sensor Networks, and directly-attached I/O hardware; on the other, software-based functionality such as real-time collection of sensor readings, sending out commands to control lights, heating, and appliances in the house, calculating trends and other statistics, and presenting this information in a web browser. And eventually also the management of potentially large rule sets to guard and even control certain actions in and around the house. The hub is where everything passes through; it’s also the autonomous “always-on” part of JET.

That’s quite a mouthful. In a nutshell: JET is for home monitoring and automation, whereby JET/Hub takes care of orchestrating the entire system.

To expand a bit on an earlier set of requirements, the hub should:

  • make minimal demands on the hardware and run well on a small Linux board
  • allow continuous development on a “live” system, without constant restarts
  • be flexible enough to support a very wide range of home sensors and actuators
  • make installation very straightforward, and likewise for subsequent updates
  • support remote management and avoid the need to log in and enter commands

Yet Another HAS

There are many Home Automation Systems. Even at JeeLabs there have been several major iterations: JeeMon/JeeBus (Tcl), HouseMon 0.7 (Node.js, with “Briqs”), HouseMon 0.8 (Node.js, using “Primus”), and HouseMon 0.9 (with Go, using “Flow”). But hey, that’s just the JeeLabs code - there are tons (hundreds?) of other OSS automation projects on GitHub alone.

The reason for JET is really that nothing else seems to fit the (perhaps not totally conventional) requirements here at JeeLabs:

  • it should be extremely lightweight, to run on a small Linux system with very low energy demands (50W for an always-on system still wastes 438 kWh each year)
  • it should support continuous development, since the system needs to remain useful - and usable - for at least a decade, and preferably well beyond that
  • it should make few assumptions in the core about technology, since the needs and available solutions are bound to change drastically over time

JET is not about making the flashiest choices today - it’s about picking a limited set of design guidelines and adopting “minimally viable” conventions. And based on that: implementing a small core to keep around for a long time.

JET’s design choices

It’s time to cut through the vague hand-waving fog, and make some hard choices:

  • all subsystems of JET will run as separate processes: 1 “hub” and N “packs”
  • the hub stays running 24 / 7, and manages the lifetimes of all the JET packs
  • all communication between hub and packs goes through an MQTT server (broker)
  • MQTT’s “topics” lend themselves well to designing a clear naming hierarchy
  • the payload of messages going through MQTT should be JSON in most cases

The MQTT server of choice right now is Mosquitto, which is open source, highly standardised, and well-tested. Furthermore, it scales well and it’s widely available on all major platforms.

The hub subsystem is implemented in the Go language, which is also open source, portable, in active development yet very robust, and extremely well-suited for network-oriented applications. Being statically compiled (yet supporting flexible dynamic typing) means that the hub code can be built and installed as executable with zero external package dependencies.

The major functions included in the hub (as opposed to being implemented in JET packs) are:

  • simple communication with the MQTT server/broker
  • connecting to serial ports (incl. USB) to capture data and emit commands
  • a built-in fast and scalable key-value data store
  • a built-in web server with websockets, for efficient web browser access
  • installing / removing JET packs, and a way of starting and stopping them
  • registration and discovery services to let packs work together
  • a robust upgrade mechanism for packs, as well as for the hub itself
  • supervising all running packs, with configurable automatic restart-on-failure
  • watchdogs to detect system anomalies and report / act upon them
  • basic system and error logging to stdout/stderr
  • everything else is configurable, and can evolve massively over time…
  • as long as the hub can launch it, a pack can be built in any way you like

A brief attempt to include the MQTT broker inside the hub as well has been abandoned for now, since the SurgeMQ package is not quite ready for prime time. For now, JET will rely on both the hub and the MQTT broker running alongside each other as separate processes.

Oh, and by the way: this will be called “JET version 4.0” … gotta start somewhere, right?

JeeLabs: Connecting to serial ports


The main mechanisms for communicating between devices around the house in JET are via serial communication and via Ethernet (LAN or WLAN) - often in the form of USB devices acting as virtual serial interfaces. For simple radio modules used for a Wireless Sensor Network, we can then use a USB or WiFi “bridge”.

Other (less common) options are: I2C and SPI hardware interfaces, and direct I/O via either digital or analog “pins”. For these special-purpose interfaces, there is the WiringPi library, which has often been ported to boards other than the Raspberry Pi for which it was originally conceived. In this case, a small C program can be used as a bridge to MQTT.

Network connections are simple in Linux, and in particular in Go, and will not be covered here right now. Besides, with MQTT in the mix, network access is essentially solved if the other end knows how to act as MQTT client. Getting data into JET via the network is easy - even if some massaging is needed, it can be done later by including some custom code for it in a JET pack.

Serial ports are a different story. There are many serial conventions in use, with all sorts of settings that you need to get just right to send messages across: baudrate, parity, stop bits, h/w or s/w handshake - it can be quite a puzzle. Just enter the man stty command on Linux to see how many different configuration choices have been added over time…

Fortunately, a few common serial interface conventions will cover the majority of today’s cases, when it comes to “hooking up” a serial device such as a USB-to-serial “dongle”. And Linux tends to have excellent support out of the box for all the different brands and vendors out there. All we need is some glue code in the hub, and we should be able to get serial data in and out.

And yet that’s just step one in this story. Welcome to the “interfacing to the real world” puzzle!

We also need to match the serial data-format choices of the device. Is it text? Is each line one message? Or if it’s binary data: how do we know where a message ends and the next one begins? There are so many different “framing” and other protocol conventions, that this is probably best handled in a custom pack - at least for more complex cases. But even then we need a serial driver which is able to pass all the information faithfully across to that JET pack.

Another area of concern is with the I/O pins other than the send and receive lines: do we need to connect any of the RTS, CTS, DTR, DSR pins? Do they need to be in a certain state?

Eventually, many of these use cases will need to be addressed. For now, let’s just focus on a basic subset and aim for the following scenario:

  • plugging in a serial USB adapter, or an Arduino / JeeLink based on one
  • opening and closing a specific serial port on request, via MQTT
  • being able to receive plain text, line by line, and to send arbitrary text
  • adjusting the state of the DTR and RTS pins, for reset / upload control
  • configuring standard baud rates, from at least 4,800 to 230,400 baud
  • inserting brief delays of a few milliseconds, up to perhaps a few seconds

What about the actual serial data I/O?

This is where MQTT’s pub-sub can help a lot: we can subscribe to a fixed/known topic for each interface, and pass each incoming message to the serial port. The advantage over plain serial is that any number of processes can do so - if more than one sends at the same time, the output will get inter-mixed, but that’s fine as long as each message is a self-contained outbound “packet”.

On the incoming side, there are two use cases:

  1. pick up each message and pass it along to anyone interested - this is the most natural mode for MQTT and matches exactly with its pub-sub semantics

  2. briefly claim the output while a client “takes control”, which it then relinquishes once done - this is useful for an “uploader”: switch the attached device to firmware-upgrade mode, send the new firmware, and then resume normal operation, with all listeners again receiving incoming data

Here is a design, currently being implemented, which supports both modes:

  • each serial interface listens to a fixed topic, i.e. the driver for interface “xyz” subscribes to serial/xyz to receive all incoming requests and data
  • if the message is a JSON object (i.e. {...}), it is treated as a new serial port open request
  • if the message is a JSON array (i.e. [...]), then it’s a list of interface change requests
  • a JSON string is parsed (with escapes properly replaced) and sent as is
  • everything else will probably be treated as an error or be ignored
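The dispatch rules above boil down to a small classifier on the JSON payload type. This Python sketch mirrors the decisions the driver has to make (the function name and return values are made up for illustration):

```python
import json

def classify(payload: bytes) -> str:
    """Classify an incoming MQTT payload per the dispatch rules."""
    if not payload:
        return "close"          # empty message: close-only request
    try:
        msg = json.loads(payload)
    except ValueError:
        return "error"          # not JSON: treated as an error / ignored
    if isinstance(msg, dict):
        return "open"           # {...} -> serial port open request
    if isinstance(msg, list):
        return "changes"        # [...] -> interface change requests
    if isinstance(msg, str):
        return "send"           # "..." -> text to transmit as-is
    return "error"              # bare numbers, booleans, null: ignored

assert classify(b"") == "close"
assert classify(b'{"device":"/dev/ttyUSB0"}') == "open"
assert classify(b'["-dtr","%57600"]') == "changes"
assert classify(b'"hello\\n"') == "send"
```

Note that JSON string parsing already handles escape replacement, so `"hello\n"` arrives at the serial port with a real newline at the end.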

Serial port open requests

Request format, in JavaScript notation:

{
  device: <path>,           // the serial device to open
  sendto: <topic>,          // the topic to forward incoming data to
  init: [<commands...>]     // initialisation commands and settings
}

Example (JSON):

{"device":"/dev/ttyUSB0","init":["-dtr","%57600"],"sendto":"logger/USB0"}

New open requests implicitly close the serial port first, if previously open.

To close the serial port and not re-open it, we can send an empty message. This is not valid JSON, but will be recognised as a special “close-only” request.
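A hedged sketch of how such an open request might be validated, using the field names from the format above (the helper itself is hypothetical, and init is treated as optional here since the spec doesn’t say otherwise):

```python
import json

REQUIRED = ("device", "sendto")

def parse_open_request(payload: bytes):
    """Parse an open request; returns (device, sendto, init), or None
    for the special empty close-only request."""
    if not payload:
        return None                     # close-only request
    req = json.loads(payload)
    if not isinstance(req, dict):
        raise ValueError("open request must be a JSON object")
    for field in REQUIRED:
        if field not in req:
            raise ValueError("missing field: " + field)
    return req["device"], req["sendto"], req.get("init", [])

dev, topic, init = parse_open_request(
    b'{"device":"/dev/ttyUSB0","init":["-dtr","%57600"],"sendto":"logger/USB0"}')
# dev == "/dev/ttyUSB0", topic == "logger/USB0", init == ["-dtr", "%57600"]
```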

Serial interface change requests

Interface change requests are inside a JSON array and get processed in order of appearance:

  • "%<number>" - set the interface to the specified baud rate
  • "+dtr" - assert the ¬DTR line (i.e. a low “0” logical level)
  • "-dtr" - de-assert the ¬DTR line (i.e. a high “1” logical level)
  • "+rts" - assert the ¬RTS line (i.e. a low “0” logical level)
  • "-rts" - de-assert the ¬RTS line (i.e. a high “1” logical level)
  • "=text" - send some text as-is, for use between the other requests
  • <number> - delay for the specified number of milliseconds (1..1000)

(note: “¬” denotes a pin with inverted logic: “1” is inactive, “0” is active!)
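An interpreter for these change requests could be as simple as a loop over the array. The sketch below records its actions on a stand-in “port” object rather than driving real hardware - the method names (set_baudrate, set_dtr, etc.) are invented for this example:

```python
def run_changes(requests, port):
    """Apply interface change requests in order of appearance.
    `port` is any object with set_baudrate / set_dtr / set_rts /
    write / delay methods - a stand-in for the real serial driver."""
    for req in requests:
        if isinstance(req, (int, float)):       # delay in milliseconds
            if not 1 <= req <= 1000:
                raise ValueError("delay out of range")
            port.delay(req)
        elif req.startswith("%"):               # "%57600" -> baud rate
            port.set_baudrate(int(req[1:]))
        elif req in ("+dtr", "-dtr"):           # assert = logic low
            port.set_dtr(req[0] == "+")
        elif req in ("+rts", "-rts"):
            port.set_rts(req[0] == "+")
        elif req.startswith("="):               # "=text" -> send as-is
            port.write(req[1:])
        else:
            raise ValueError("unknown request: " + repr(req))

class Recorder:
    """Logs every method call instead of touching hardware."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        return lambda *args: self.log.append((name,) + args)

r = Recorder()
run_changes(["+dtr", 100, "-dtr", "%57600", "=hello"], r)
# r.log == [("set_dtr", True), ("delay", 100), ("set_dtr", False),
#           ("set_baudrate", 57600), ("write", "hello")]
```

Separating the request interpretation from the actual pin twiddling also makes the reset / upload sequences easy to test without hardware attached.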

Additional requests could be added later, e.g. to switch between text and binary mode, to set a receive timeout, and to encode/decode hex, base64, etc.

Power-up behaviour

While the above is sufficient to use serial ports, it does not address what happens on power-up or after a system restart. Ah, but wait… meet MQTT’s “RETAIN” flag:

  • when the RETAIN flag is set in a message, a copy of the message is stored in the MQTT server, and a duplicate is automatically sent whenever a matching subscription is set up

  • by setting RETAIN on the serial port-open request, we indicate that this request is to persist across reboots of the system

  • only one retained message is kept per topic (the last one) - it overwrites any older one

  • an empty message with the RETAIN flag set removes the stored message from the server

In other words: to configure our system, we merely need to send the proper open requests for each serial port once - with the RETAIN flag set, i.e. the MQTT server now acts as persistent configuration store for each of these settings.
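The retained-message behaviour can be illustrated with a toy in-memory broker - a simulation of the semantics only, not real MQTT code:

```python
class ToyBroker:
    """Minimal model of MQTT's RETAIN semantics: one retained message
    per topic, replayed to late subscribers, removable by publishing
    an empty retained message."""
    def __init__(self):
        self.retained = {}          # topic -> payload
        self.subs = {}              # topic -> list of callbacks

    def publish(self, topic, payload, retain=False):
        if retain:
            if payload:
                self.retained[topic] = payload   # last one wins
            else:
                self.retained.pop(topic, None)   # empty removes it
        for cb in self.subs.get(topic, []):
            cb(payload)

    def subscribe(self, topic, cb):
        self.subs.setdefault(topic, []).append(cb)
        if topic in self.retained:               # replay on subscribe
            cb(self.retained[topic])

broker = ToyBroker()
broker.publish("serial/xyz",
    '{"device":"/dev/ttyUSB0","init":["-dtr","%57600"],"sendto":"logger/USB0"}',
    retain=True)
seen = []
broker.subscribe("serial/xyz", seen.append)      # a late subscriber
# seen now holds the retained open request, as if it had just been sent
```

This is exactly why the serial driver can restart at any time and still find its configuration waiting: subscribing to serial/xyz hands it the last retained open request.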

Other messages, without the RETAIN flag set, pass through as is - they won’t affect the storage of prior retained messages. Normal outbound data should therefore be published without the RETAIN flag. Likewise for interface change requests: they must be processed, but not stored.

To permanently open a serial port in a different way, we can simply send a new open request message, with RETAIN set, and replace the previous one.

Incoming data

The open request includes a sendto: field, which specifies the topic where every incoming message is sent. In the initial implementation, data is expected to come in line-by-line, and each line will be re-published to the given topic.

By using open requests without the RETAIN flag, we can play tricks and briefly re-open a serial port for a special case, with a different sendto value. Then, once ready, we simply re-open again with the original topic, and data will start getting dispatched as before.
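Put together, the “briefly take control” trick for an uploader might look like the following sequence of publishes. The topic and payload formats follow the spec above, but the device path, reset-pulse timings, and reply topic are all invented for this illustration:

```python
import json

def upload_sequence(iface, firmware_text, normal_sendto):
    """Build the (topic, payload) messages an uploader could publish,
    in order: re-open with a private sendto, pulse DTR/RTS to reset
    the device into its bootloader, send the firmware, then restore
    the original routing. None are retained, so the stored
    configuration in the MQTT server is untouched."""
    topic = "serial/" + iface
    return [
        (topic, json.dumps({"device": "/dev/ttyUSB0",
                            "sendto": "uploader/replies"})),
        (topic, json.dumps(["+dtr", "+rts", 100, "-dtr", "-rts", 250])),
        (topic, json.dumps(firmware_text)),
        (topic, json.dumps({"device": "/dev/ttyUSB0",
                            "sendto": normal_sendto})),
    ]

msgs = upload_sequence("xyz", ":00000001FF\n", "logger/USB0")
# four publishes to "serial/xyz": open, reset pulse, data, re-open
```

Once the final re-open goes out, all the original listeners start receiving data again, none the wiser.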

As already mentioned, the above mechanisms are currently being implemented and will be included in the hub once the software is working and stable. For real code, see GitHub.
