
JeeLabs: Centralised node management


There are a number of ways to avoid the long-term mess just described. One is to make the target environments aware of source code, for example by running a Basic / Forth / JavaScript / whatever interpreter on them - then we can work on the target environment as on a regular development platform: the source stays on the target, and whenever we come back to it, we can view that code and edit it, knowing that it is exactly what has been running on the node before.

There are some serious drawbacks with this source-on-the-target approach:

  • it requires a fairly hefty µC, able not only to run our code, but also to act as an editor and mini development environment - but it is quite effective, and probably a reason why BASIC became one of the mainstream languages decades ago, even in “serious” industrial & laboratory settings

  • you end up with the worst of all worlds, by today’s standards: a target µC, struggling to emulate a large system, and a very crude editing context

  • if that target machine breaks, you lose the code - there is no backup

  • no revision control, no easy code sharing with other nodes, no history

Trying to turn a remote node into a “mini big system” may not be such a great idea after all, in the context of one-off development at home with a bunch of remote nodes: it really is too risky, especially for the tinkering and experimentation that comes with physical computing projects.

Some recent developments in this area, such as Espruino and MicroPython, do try to mitigate the drawbacks by offering a front end which keeps the source code local - but then you end up back at square one: with a potential disconnect between what’s currently running on each node and the source code associated with it, stored on the central/big development setup.

Another option, which takes some discipline, is to become very good at taking snapshots of your development environment setup, and in particular at taking notes of which build ended up where. With proper procedures, everything becomes traceable, recoverable, and repeatable.

The problem with it: discipline? notes? backups? constantly? for hobby projects? … yeah, right!

To reiterate: the central problem is that development happens in a different context than actual use - embedded µCs can’t come anywhere near the many very convenient capabilities of modern development environments, with their fancy programmer’s editors, elaborate IDEs, revision control systems, cross-compiler toolchains, debuggers, and uploaders.

The issue here is not that our development tools are lacking. The problem is that they tend to be used in a node-by-node “fire and forget” development style, which doesn’t help with the entire (evolving) home collection of nodes and gadgets. Which node was compiled how again?

The best we can probably do is to aim for maximum automation, and to focus all development in a single spot - not just on a node-by-node basis, but for the entire network and collection of devices we’re gradually setting up. And not just for one node type, or even one vendor’s products, but for everything we’re tying together, from home-grown one-off concoctions to commercially obtained ready-to-use devices and gadgets.
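
As a flavour of what such automation could look like, here is a minimal sketch of a helper which answers the “which node was compiled how again?” question by appending a record to a log every time a firmware image is uploaded - the node names, file names, and log format are all made up for illustration:

    #!/usr/bin/env python3
    # Append a small record every time a firmware image is sent out to a node,
    # so that "which node runs what" can still be answered years later.
    # Node names, file names, and the log location are examples only.
    import hashlib, json, subprocess, sys, time

    def git_commit():
        # Commit hash of the project area, so the exact sources can be found back.
        return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

    def log_deploy(node, image, logfile="deployments.jsonl"):
        digest = hashlib.sha256(open(image, "rb").read()).hexdigest()
        record = {
            "node": node,                  # e.g. "greenhouse-3"
            "image": image,                # firmware file that was uploaded
            "sha256": digest,              # fingerprint of that exact binary
            "commit": git_commit(),        # sources it was built from
            "time": time.strftime("%Y-%m-%d %H:%M:%S"),
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        log_deploy(sys.argv[1], sys.argv[2])   # usage: logdeploy.py <node> <image.hex>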

If all design and development takes place in one place, and if all results are pushed out to the remote node “periphery” in a semi-automated way, then we may stand a chance of being able to re-use our work and re-generate new revisions in the same way at a (much) later date. Note that “one place” doesn’t imply always developing on the same machine (that, too, is bound to evolve after all) - we just need to have remote access to that “one place”, the fixed point of it all.

In the longer term, i.e. a decade or more, there is no point trying to find a single tool or setup for all this. Technology changes too fast, and besides: we’re much too keen on trying out the latest new fad / trick / language / gadget. We really need to approach this all with a heterogeneous set of technologies in mind. The goal is not one “perfect” choice, but a broad approach to keeping track of everything over longer periods of time. Much longer than our attention span for any specific new node we’re adding to our home-monitoring / home-automation mix.

Maybe it’s time to treat our hobby as a “multi-project”: lots of ideas, lots of experimentation, hopefully lots of concrete working parts, but by necessity it’ll also be a bit like herding cats: alternative / unfinished designs, outdated technologies alongside shiny new ones, and lots of loose ends, some actively worked on, some abandoned, some mature and “in production”.

In terms of keeping things organised to avoid the predictable mess described in the previous article, there really is no other sane option than to at least track the entire home-monitoring and home-automation project in one place. And there’s a fairly simple way to make this practical: simply add a web server on top, which allows browsing through all the files in the project. It can be password-protected if needed, but the key point is that a single area somewhere needs to represent the state of our entire “multi-project”.
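
Just to show how little is needed for that web-server-on-top idea: the sketch below (Python’s standard library only) serves the project area as browsable directory listings behind a simple password check - the directory, port, and credentials are of course placeholders:

    #!/usr/bin/env python3
    # Serve the whole project area as browsable directory listings,
    # behind a very basic HTTP auth check. Paths and credentials are placeholders.
    import base64
    from functools import partial
    from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

    USER, PASSWORD = "me", "secret"
    TOKEN = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()

    class AuthHandler(SimpleHTTPRequestHandler):
        def do_GET(self):
            # Reject requests which don't carry the expected Basic Auth header.
            if self.headers.get("Authorization") != "Basic " + TOKEN:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="multi-project"')
                self.end_headers()
                return
            super().do_GET()

    if __name__ == "__main__":
        handler = partial(AuthHandler, directory="/home/me/multi-project")
        ThreadingHTTPServer(("", 8080), handler).serve_forever()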

How do we get there? Some options come to mind: we could add a web server on the same machine our home server is running on (JET or whatever), and make sure that all the related code, tools, documentation, and design notes live there. We could turn that entire area into one massive Git repository, and even keep a remote master copy somewhere (on GitHub, why not?). Note that this is not really about sharing, it’s merely a way to keep track of what is inevitably going to be a unique and highly personal setup. And if putting it in public view doesn’t feel right, then of course you shouldn’t be placing your copy on GitHub. Put it in a personal cloud folder instead, or keep it on a server within your own house (you do have a robust backup strategy in place, right?). The main point is: treat your hobby setup as if it were an “official” project, because it’s even more important to create a durable structure for such a unique and evolving configuration than for public open-source stuff, which is going to be replicated all over the place anyway.
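
A snapshot helper for such a repository could be equally small - the sketch below commits whatever changed in the project area and pushes it out if a remote is configured (the path and remote name are, again, just examples):

    #!/usr/bin/env python3
    # Commit everything that changed in the project area and push it out,
    # so the "one place" keeps reflecting the current state of the multi-project.
    import subprocess, time

    PROJECT = "/home/me/multi-project"     # example location of the whole area

    def git(*args):
        return subprocess.run(["git", "-C", PROJECT, *args],
                              capture_output=True, text=True)

    def snapshot():
        git("add", "-A")
        msg = "snapshot " + time.strftime("%Y-%m-%d %H:%M")
        if git("commit", "-m", msg).returncode == 0:
            print("committed:", msg)
        else:
            print("nothing to commit")
        # Push only if a remote called "origin" exists (GitHub, private server, ...)
        if "origin" in git("remote").stdout.split():
            git("push", "origin", "HEAD")

    if __name__ == "__main__":
        snapshot()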

As you can see, this isn’t about “the” solution, or “the” technology. There is no single one. In a way, it’s about the greater context of “sustainable tinkering”, thinking about where your projects and hobbies will take you (and your family members) ten years from now. You’re probably not doing all this to become a “sysadmin for your own house”, right?

What we need to do, is design and implement “in the open”, so that we can go back and tweak / fix / improve things later, possibly many years later, when all the neat ideas and builds will be fond memories, but their details long-forgotten. Note that “in the open” does not imply “in public”, it may well be open to an audience of just one person: you. What “open design” actually means here, is: resumable design.

Keep in mind that this is a long-term, small-scale, personal, bursty, hobby-mode context. Life is too short to allow it to turn into a long-term mess - yet that seems to be exactly what happens a lot, well… at least here at JeeLabs. It’s time to face up to it, and to try to avoid these problems.

From this perspective, this hobby may become a whole different ball game. Tools which could come in handy include Hugo, to easily manage notes (ignore all the flashy “themes”), and Gogs, to set up a personal git repository browser. Heck… taking notes, documenting your ideas and progress, and tracking the evolution of your own designs over time could actually be fun!

