Channel: OpenEnergyMonitor aggregator

FairTradeElectronics: Rediscover Your Music With the Sennheiser HD700 digital headphones


Music is only as good as the device you use to listen to it. There are a number of devices that let you experience music, and one of them is the headphone: a pair of speakers mounted on a frame that goes over the skull, with cups over the ears. There are many headphone brands in the world, and each one professes to make the highest-quality headphones. Of them all, none is more luxurious, flamboyant and downright effective than the Sennheiser HD700 dynamic stereo headphones.

Introducing Sennheiser

The HD700 is an ultra-modern pair of open circumaural headphones, which means the cups have a ventilated back casing. The cups feature a mesh construction that is beautiful to look at and also allows for full, transparent sound. Thanks to this cup construction, these headphones produce warm, balanced music. They are also chock-full of technology: the speakers in the cups are fitted with ventilated magnets, so you do not experience distortion from air flowing around the cups. In addition, the acoustic casing is angled, providing superb projection and natural-sounding notes.

Sennheiser used some advanced drivers in these headphones. Not only are they modern and chic, they can produce high sound pressure, and they have a flat frequency response, which keeps distortion to a minimum. To make the headphones more convenient to use, Sennheiser made the connector cable completely detachable. The cable is made up of four silver-plated, oxygen-free copper wires, so it conducts the signal well even at high frequencies.

Technology meets design

The Sennheiser HD700 headphones are a beauty to look at. They have a space-age mesh on the outer parts of the cups, built to express the industrial processes that make these headphones a reality. The headband is coated in silicone and has a dual-material yoke, which is aesthetically very pleasing. The cups have soft velour padding that goes all around your ears, helping the music sound crystal clear. Despite the large cups and strong headband frame, these headphones are very light, thanks to the aerospace-grade materials used to construct them.

Convenient construction

Few headphones in the world are built with convenience in mind. First of all, the cable can be removed and stored separately, and you can also upgrade it in case yours gets worn out. The cable has an indent in the rubber casing that helps you insert it into the headphone casing at the cups. The braided nature of the cable helps it survive the wear and tear of day-to-day use, and shows that this is a very high quality pair of headphones.


JeeLabs: Low-power mode :)


First, an announcement:

Starting now, the JeeLabs Weblog is entering “low-power mode” for the summer period. What this means: one weblog post every Wednesday, but no additional articles.

While in low-power mode, I’ll continue to write about fun stuff, things I care about, bump into, and come up with – and maybe even some progress reports about some projects I’m working on. But no daily article episodes, and hence no new material for the Jee Book:

Jeebook cover

Speaking of which: The Jee Book has been updated with all the articles from the weblog for this year so far and has tripled in size as a result. Only very light editing has been applied at this point – but in case you want it all in a single convenient e-book, there ya go!

Have a great, inspiring, and relaxing summer! (Or winter, if you’re down under.)

(For comments, visit the forum area)

mharizanov: Some thoughts on security in terms of IoT



Connected devices and sensors are the fastest-growing sources of data. Billions of records are generated daily around the globe and transported across networks to be consumed where needed. In that context, the security of data in transit and at rest is quite important, especially when it comes to sensitive personal data.

Back in 2011, the Fitbit portal exposed thousands of records of its subscribers’ intimate activities on the Internet. How did that affect the company’s reputation and customer trust? Such incidents damage the whole IoT industry, not just the company that allowed them to happen.

As customers, we want our data encrypted in transport, stored safely and kept private. The collation of multiple points of data can quickly become personal information as events are reviewed in the context of location, time, recurrence, etc. The regular purchase of certain food types, for example, may reveal religion or ongoing health concerns. Health records, location details, energy use patterns and other private data can easily be used to reconstruct one’s life in great detail. This data is therefore naturally of interest to many: governments, insurance companies, marketing/advertising agencies and certainly criminals are all after it.

Unauthorized access to data isn’t the only problem. Connected devices are designed to be remotely controllable, and with surprisingly many consumers relying on default product credentials, it is strikingly easy to gain control of connected appliances. Dynamic DNS services are a gold mine for those hunting for connected devices. One could easily end up with a spying thermostat, a fridge that sends spam, or someone remotely controlling your smart home.

What makes it so hard to get security right? Design flaws, implementation flaws and mismanagement are often the source of vulnerabilities. Systems are “adequately secure” only relative to a perceived threat, and the absence of obvious insecurities is not a good indication that a system is adequately secure. Users also play their part in decreased security: not updating firmware, using poor passwords, fire-walling inadequately. “Security” through obscurity is still very common, and it seems folks still believe in it.

Overall, security concerns are at the top of the list of barriers to IoT adoption, with consumer awareness in this area on the rise. Any IoT business model must adequately address these concerns in order to be successful and sustainable.


mharizanov: Tweeting silicon


Here is a fun project I did a couple of days ago: a tweeting ESP8266. The typical approach for such a task (and probably the better one) is to use a proxy service like ThingSpeak’s ThingTweet or PushingBox, or to build a proxy yourself with Node-RED or mqttwarn. It is much more fun, though, to tweet directly from a microcontroller. Just think about the complexity of using TLS and OAuth along with Twitter’s API, all done from a $5 Wi-Fi connected SoC..

Getting this done was pretty much straightforward, except for the HMAC-SHA1 hashing of the OAuth base string. It turned out that the HMAC-SHA1 implementation in the ESP8266 SDK only supports keys smaller than the block size (64 bytes), while Twitter needs a much larger key for the signature. Wikipedia’s article on HMAC explains how to handle that case, and I was able to generate a correct hash by applying one additional SHA-1 pass to the key, reducing it to 20 bytes:

if (length(key) > blocksize) then
        key = hash(key) // keys longer than blocksize are shortened
end if
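That key-shortening step is easy to sketch in Python with the standard hashlib and hmac modules; the 100-byte key below is just a stand-in for Twitter's longer OAuth signing key:

```python
import hashlib
import hmac

BLOCK_SIZE = 64  # SHA-1 block size in bytes

def shorten_key(key):
    # Per RFC 2104: keys longer than the block size are first hashed,
    # reducing them to a 20-byte SHA-1 digest
    if len(key) > BLOCK_SIZE:
        return hashlib.sha1(key).digest()
    return key

long_key = b"x" * 100          # stand-in for a long OAuth signing key
short_key = shorten_key(long_key)
assert len(short_key) == 20

# HMAC over the shortened key equals HMAC over the original long key,
# since a compliant HMAC implementation shortens internally anyway
msg = b"oauth base string"
assert hmac.new(long_key, msg, hashlib.sha1).digest() == \
       hmac.new(short_key, msg, hashlib.sha1).digest()
```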

So my ESP8266 can now tweet. I created a dedicated Twitter account for it, named “Tweeting silicon”: @tweetingsilicon

tweeting_silicon

Code for the project is available on GitHub. The code is dirty, so please contribute to improve it.

mharizanov: DDoS attack on my blog


An intensive Distributed Denial of Service attack is currently under way against my blog, with HTTP request rates hitting thousands per minute. It all started a few days ago, when I received a message from my hosting provider stating that my blog’s shared hosting account had massive CPU/bandwidth usage. The folks offered me a solution: “upgrade to a higher plan so you can meet the DDoS traffic”. I was speechless, and decided to take things into my own hands.

Looking at the stats, the moment the attack started is also obvious:

DDoS

During the peak of the attack my blog was mostly down.

Victims of DDoS often ask themselves the same questions: “why” and “why me”. Generally, for this type of attack, the “why” boils down to one of these cases:

  • blackmailing for money
  • blocking competition
  • attempting censorship
  • mischief

I can’t imagine my humble personal blog falls into any of the first three categories, so I believe it is just someone being playful.

The attack is ongoing (as I write these lines) from numerous IP addresses all over the world, but fortunately for me it is not being executed in a very smart way, so I was able to pick up a trend and catch these via a smart self-updating .htaccess “deny from” rule. I can’t give out many details on the exact measures, as the attacker could be reading this and adjust.
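Purely as an illustration of the general idea (not the actual measures, which stay private): a Python sketch that counts requests per IP in an access log and emits “deny from” lines for the heaviest offenders. The threshold and log format here are assumptions:

```python
from collections import Counter

THRESHOLD = 1000  # requests before an IP gets blocked (assumed value)

def deny_rules(log_lines):
    # Apache-style access logs start with the client IP as the first field
    hits = Counter(line.split()[0] for line in log_lines if line.strip())
    return ["deny from %s" % ip
            for ip, n in hits.most_common() if n >= THRESHOLD]

# A fake log: one IP hammering the server, one normal visitor
log = ['203.0.113.7 - - "GET / HTTP/1.1" 200'] * 1500 + \
      ['198.51.100.2 - - "GET / HTTP/1.1" 200'] * 3
print("\n".join(deny_rules(log)))
```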

The result of this blocking is obvious from the graph above, and my blog is up again.


mharizanov: Free signed SSL certificate for my blog


As soon as I blogged about IoT and security a few weeks ago, my blog got hit by a massive DDoS attack, combined with daily hack attempts via WordPress’s backend and SSH. I’m dealing with both issues pretty successfully for now, but this is a reminder to myself to step up on security. I’ve since switched hosting of this blog to a $5/month VPS at DigitalOcean, and I’m really happy with the service so far.

As mentioned in my blog post on IoT security, one of the items to address is the security of data in transit. My connected things use plain HTTP posts to an Emoncms instance running on my domain, meaning they are vulnerable to man-in-the-middle attacks. Probably the best way to handle this is to use an SSL certificate and serve HTTP over TLS (HTTPS). But commercial SSL certificates cost upwards of $60 per year, which isn’t worth it for a personal blog.

Fortunately, startSSL.com offers completely free verified SSL certificates that you can use on your website. I followed DigitalOcean‘s instructions, and ~15 minutes later I had it up and running:

ssl_cert

Not bad, should have done that long ago.

P.S. If you sign up for a DigitalOcean VPS via this link, you will get $10 in credit and I’ll benefit some too.

JeeLabs: Greasing the “make” cycle on Mac


I’m regularly on the lookout for ways to optimise my software development workflow. Anything related to editing, building, testing, uploading – if I can shave off a little time, or better still, find a way to automate more and get tasks into my muscle memory, I’m game.

It may not be everyone’s favourite, but I keep coming back to vim as my editor of choice, or more specifically the GUI-aware MacVim (when I’m not working on some Linux system).

And some tasks need to be really fast. Simply Because I Do Them All The Time.

Such as running “make”.

So I have a keyboard shortcut in vim which saves all the changes and runs make in a shell window. For quite some time, I’ve used the Vim Tmux Navigator for this. But that’s not optimal: you need to have tmux running locally, you need to type in the session, window, and pane to send the make command to (once after every vim restart), and things … break at times (occasional long delays, wrong tmux pane, etc). Unreliable automation is awful.

Time to look for a better solution. Using as few “moving parts” as possible, because the more components take part in these custom automation tricks, the sooner they’ll break.

The following is what I came up with, and it works really well for me:

  • hitting “,m” (i.e. “<leader>m”) initiates a make, without leaving my vim context
  • there needs to be a “Terminal” app running, with a window named “⌘1” open
  • it will receive this command line: clear; date; make $MAKING

So all I have to do is leave that terminal window open – set to the proper directory. I can move around at will in vim or MacVim, run any number of them, and “,m” will run “make”.

By temporarily setting a value in the “MAKING” shell variable, I can alter the make target. This can be changed as needed, and I can also change the make directory as needed.

The magic incantation for vim is this one line, added to the ~/.vimrc config file:

    nnoremap <leader>m :wa<cr>:silent !makeit<cr>

In my ~/bin/ directory, I have a shell script called “makeit” with the following contents:

    exec osascript >/dev/null <<EOF
        tell application "Terminal"
            repeat with w in windows
                if name of w ends with "⌘1" then
                    do script "clear; date; make $MAKING" in w
                end if
            end repeat
        end tell
    EOF

The looping is needed to always find the proper window. Note that the Terminal app must be configured to include the “⌘«N»” command shortcut in each window title.

This all works out of the box, with no dependency on any app, tool, or utility other than what is present in a vanilla Mac OS X installation. It should be easy to adapt to other editors.

It can also be used from the command line: just type “makeit”.

That’s all there is to it. A very simple and clean convention to remember and get used to!

FairTradeElectronics: How To Take Care Of Your Parrot


Parrots are birds that many people keep as pets. They have impressed human beings for many centuries: parrots were kept by many ancient peoples, from kings, warlords and pirates down to common folk. The birds are admired for their colorful feathers, their high intelligence and their ability to talk. There are very many types of parrots, and each type has a style and personality quite distinct from the rest; different types love to eat certain foods and live in their own unique manner. If you own a parrot, you should take care of the bird wisely and maintain it well, so that it has an enjoyable life in your household. Some well-known breeds include the colorful macaw, the comedic cockatoo and the majestic African grey.

You can make your parrot happy, playful and healthy by purchasing the right cage for it, as different types of parrots prefer specific types of cages. The cage should have enough space to give the bird room to play and exercise, since parrots are playful and love regular exercise. It should let your parrot stretch its wings, give it enough room when feeding, and leave space to play with its toys and preen its feathers. Many parrot cages are manufactured with a “play-top” that lets the parrot play, and some have an additional pull-out tray beneath the cage that lets you collect the litter easily.

Some pet shops sell parrot cages that you must assemble before putting your parrot in them; you will be supplied with a manual explaining how. You assemble the “bottom stand” first and finish by putting the perches and the feeders into position. To keep your parrot healthy, you will need to clean the cage regularly: replace the cage liners daily, wipe away food leftovers and waste daily, and wash the food and water dishes daily. The perches and toys should be cleaned thoroughly at least once per week, and the entire cage at least once every month. Sometimes you will need to dismantle the whole cage, wash every part well, and then reassemble it.

Different types of parrot cages can be purchased in many stores and pet shops. You can also purchase them from online pet supermarkets.

Take good care of your parrot and maintain it well so that the bird is always happy and attractive: feed it well, spray it, and keep it clean, and you will be assured of a very good pet.


JeeLabs: RFM69s, OOK, and antennas


Recently, Frank @ SevenWatt has been doing a lot of very interesting work on getting the most out of the RFM69 wireless radio modules.

His main interest is in figuring out how to receive weak OOK signals from a variety of sensors in and around the house. So first, you’ll need to extract the OOK information – it turns out that there are several ways to do this, and when you get it right, the bit patterns that come out snap into very clear-cut 0/1 groups – which can then be decoded:

FS20 histo 32768bps

Another interesting bit of research went into comparing different boards and builds to see how the setups affect reception. The good news is that the RFM69 is fairly consistent (no extreme variations between different modules).

Then, with plenty of data collection skills and tools at hand, Frank has been investigating the effect of different antennas on reception quality – which is a combination of getting the strongest signal and the lowest “noise floor”, i.e. the level of background noise that every receiver has to deal with. Here are the different antenna setups being evaluated:

RFM69 three antennas 750x410

Last but not least, there is an article about decoding packets from the ELV Cost Control with an RFM69 and some clever tricks. These units report power consumption every 5 seconds:

ELVCostControl

Each of these articles is worth a good read, and yes… the choice of antenna geometry, its build accuracy, the quality of cabling, and the distance to the µC … they all do matter!

JeeLabs: Forth on a DIP


In a recent article, I mentioned the Forth language and the Mecrisp implementation, which includes a series of builds for ARM chips. As it turns out, the mecrisp-stellaris-... archive on the download page includes a ready-to-run build for the 28-pin DIP LPC1114 µC, which I happened to have lying around:

DSC 5132

It doesn’t take much to get this chip powered and connected through a modified BUB (set to 3.3V!) so it can be flashed with the Mecrisp firmware. Once that is done, you end up with a pretty impressive Forth implementation, with half of flash memory free for user code.

First thing I tried was to connect to it and list out all the commands it knows – known as “words” in Forth parlance, and listed by entering “words” + return:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
words words
--- Mecrisp-Stellaris Core ---

2dup 2drop 2swap 2nip 2over 2tuck 2rot 2-rot 2>r 2r> 2r@ 2rdrop d2/ d2*
dshr dshl dabs dnegate d- d+ s>d um* m* ud* udm* */ */mod u*/ u*/mod um/mod
m/mod ud/mod d/mod d/ f* f/ 2!  2@ du< du> d< d> d0< d0= d<> d= sp@ sp!
rp@ rp!  dup drop ?dup swap nip over tuck rot -rot pick depth rdepth >r r>
r@ rdrop rpick true false and bic or xor not clz shr shl ror rol rshift
arshift lshift 0= 0<> 0< >= <= < > u>= u<= u< u> <> = min max umax umin
move fill @ !  +!  h@ h!  h+!  c@ c!  c+!  bis!  bic!  xor!  bit@ hbis!
hbic!  hxor!  hbit@ cbis!  cbic!  cxor!  cbit@ cell+ cells flash-khz
16flash!  eraseflash initflash hflash!  flushflash + - 1- 1+ 2- 2+ negate
abs u/mod /mod mod / * 2* 2/ even base binary decimal hex hook-emit
hook-key hook-emit?  hook-key?  hook-pause emit key emit?  key?  pause
serial-emit serial-key serial-emit?  serial-key?  cexpect accept tib >in
current-source setsource source query compare cr bl space spaces [char]
char ( \ ." c" s" count ctype type hex.  h.s u.s .s words registerliteral,
call, literal, create does> <builds ['] ' postpone inline, ret, exit
recurse state ] [ : ; execute immediate inline compileonly 0-foldable
1-foldable 2-foldable 3-foldable 4-foldable 5-foldable 6-foldable
7-foldable constant 2constant smudge setflags align aligned align4,
align16, h, , ><, string, allot compiletoram?  compiletoram compiletoflash
(create) variable 2variable nvariable buffer: dictionarystart
dictionarynext skipstring find cjump, jump, here flashvar-here then else if
repeat while until again begin k j i leave unloop +loop loop do ?do case
?of of endof endcase token parse digit number .digit hold hold< sign #> f#S
f# #S # <# f.  f.n ud.  d.  u.  .  evaluate interpret hook-quit quit eint?
eint dint ipsr nop unhandled reset irq-systick irq-fault irq-collection
irq-adc irq-i2c irq-uart

--- Flash Dictionary --- ok.

That’s over 300 standard Forth words, including all the usual suspects (I’ve shortened the above to only show their names, as Mecrisp actually lists these words one per line).

Here’s a simple way of making it do something – adding 1 and 2 and printing the result:

  • type “1 2 + .” plus return, and it types ” 3 ok.” back at you

Let’s define a new “hello” word:

: hello ." Hello world!" ;  ok.

We’ve extended the system! We can now type hello, and guess what comes out:

hello Hello world! ok.

Note the confusing output: we typed “hello” + a carriage return, and the system executed our definition of hello and printed the greeting right after it. Forth is highly interactive!

Here’s another definition, of a new word called “count-up”:

: count-up 0 do i . loop ;  ok.

It takes one argument on the stack, so we can call it as follows:

5 count-up 0 1 2 3 4  ok.

Again, keep in mind that the ” 0 1 2 3 4 ok.” was printed out, not typed in. We’ve defined a loop which prints increasing numbers. But what if we forget to provide an argument?

count-up 0 1 2 [...] Stack underflow

Whoops. Not so good: stack underflow was properly detected, but not before the loop actually ran and printed out a bunch of numbers (how many depends on what value happened to be in memory). Luckily, a µC is easily reset!

Permanent code

This post isn’t meant to be an introduction to Mecrisp (or Forth), you’ll have to read other documentation for that. But one feature is worth exploring: the ability to interactively store code in flash memory and set up the system so it runs that code on power up. Here’s how:

compiletoflash  ok.
: count-up 0 do i . loop ;  ok.
: init 10 count-up ;  ok.

In a nutshell: 1) we instruct the system to permanently add new definitions to its own flash memory from now on, 2) we define the count-up word as before, and 3) we (re-)define the special init word which Mecrisp Forth will automatically run for us when it starts up.

Let’s try it, we’ll reset the µC and see what it types out:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
0 1 2 3 4 5 6 7 8 9 

Bingo! Our new code has been saved in flash memory, and starts running the moment the LPC1114 chip comes out of reset. Note that we can get rid of it again with “eraseflash”.

As you can see, it would be possible to write a full-blown application in Mecrisp Forth and end up with a standalone µC chip which then works as instructed every time it powers up.

Speed

Forth code runs surprisingly fast. Here is a delay loop which does nothing:

: delay 0 do loop ;  ok.

And this code:

10000000 delay  ok.

… takes about 3.5 seconds before printing out the final “ok.” prompt. That’s some 3 million iterations per second. Not too shabby, if you consider that the LPC1114 runs at 12 MHz!
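That figure is easy to verify with a bit of plain arithmetic, sketched here in Python:

```python
# Quick sanity check on the timing claim (measured values from the text)
iterations = 10000000   # ten million empty "do loop" iterations
seconds = 3.5           # time until the final "ok." prompt appears
rate = iterations / seconds  # roughly 2.86 million iterations per second
print("%.2f million iterations/second" % (rate / 1e6))
```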

JeeLabs: Could a coin cell be enough?


To state the obvious: small wireless sensor nodes should be small and wireless. Doh.

That means battery-powered. But batteries run out. So we also want these nodes to last a while. How long? Well, if every node lasts a year, and there are a bunch of them around the house, we’ll need to replace (or recharge) some battery somewhere several times a year.

Not good.

The easy way out is a fat battery: either a decent-capacity LiPo battery pack or say three AA cells in series to provide us with a 3.6 .. 4.5V supply (depending on battery type).

But large batteries can be ugly and distracting – even a single AA battery is large when placed in plain sight on a wall in the living room, for example.

So… how far could we go on a coin cell?

Let’s define the arena a bit first, there are many types of coin cells. The smallest ones of a few mm diameter for hearing aids have only a few dozen mAh of energy at most, which is not enough as you will see shortly. Here some coin cell examples, from Wikipedia:

Coin cells

The most common coin cell is the CR2032 – 20 mm diameter, 3.2 mm thick. It is listed here as having a capacity of about 200 mAh.

A really fat one is the CR2477 – 24 mm diameter, 7.7 mm thick – with a whopping 1000 mAh of capacity. It’s far less common than the CR2032, though.

These coin cells supply about 3.0V, but that voltage varies: it can be up to 3.6V unloaded (i.e. when the µC is asleep), down to 2.0V when nearly discharged. This is usually fine with today’s µCs, but we need to be careful with all the other components, and if we’re doing analog stuff then these variations can in some cases really throw a wrench into our project.

Then there are the AAA and AA batteries of 1.2 .. 1.5V each, so we’ll need at least two and sometimes even three of them to make our circuits work across their lifetimes. An AAA cell of 10.5×44.5 mm has about 800..1200 mAh, whereas an AA cell of 14.5×50.5 mm has 1800..2700 mAh of energy. Note that this value doesn’t increase when placed in series!

CR2032

Let’s see how far we could get with a CR2032 coin cell powering a µC + radio + sensors:

  • one year is 365 × 24 = 8,760 hours
  • one CR2032 coin cell can supply 200 mAh of energy
  • it will last one year if we draw under 23 µA on average
  • it will last two years if we draw under 11 µA on average
  • it will last four years if we draw under 5 µA on average
  • it will last ten years if we draw under 2 µA on average
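The arithmetic behind these bullet points can be captured in a few lines of Python (the bullet values are rounded down slightly for safety margin):

```python
CAPACITY_MAH = 200.0       # nominal CR2032 capacity
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def avg_current_ua(years):
    """Average draw (in microamps) that empties the cell in `years` years."""
    return CAPACITY_MAH * 1000.0 / (HOURS_PER_YEAR * years)

for years in (1, 2, 4, 10):
    print("%2d year(s): under %.1f uA average" % (years, avg_current_ua(years)))
```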

An LPC8xx in deep sleep mode with its low-power wake-up timer kept running will draw about 1.1 µA when properly set up. The RFM69 draws 0.1 µA in sleep mode. That leaves us roughly a 10 µA margin for all attached sensors if we want to achieve a 2-year battery life.

This is doable. Many simple sensors for temperature, humidity, and pressure can be made to consume no more than a few µA in sleep mode. Or if they consume too much, we could tie their power supply pin to an output pin on the µC and completely remove power from them. This requires an extra I/O pin, and we’ll probably need to wait a bit longer for the chip to be ready if we have to power it up every time. No big deal – usually.

A motion sensor based on passive infrared detection (PIR) draws 60..300 µA however, so that would severely reduce the battery lifetime. Turning it off is not an option, since these sensors need about a minute to stabilise before they can be used.

Note that even a 1 MΩ resistor has a non-negligible 3 µA of constant current consumption. With ultra low-power sensor nodes, every part of the circuit needs to be carefully designed! Sometimes, unexpected consequences can have a substantial impact on battery life, such as grease, dust, or dirt accumulating on an openly exposed PCB over the years…

Door switch

What about sensing the closure of a mechanical switch?

In that case, we can in fact put the µC into deep power down without running the wake-up timer, and let the wake-up pin bring it back to life. Now, power consumption will drop to a fraction of a microamp, and battery life of the coin cell can be increased to over a decade.

Alternately, we could use a contact-less solution, in the form of a Hall effect sensor and a small magnet. No wear, and probably easier to install and hide out of sight somewhere.

The Seiko S-5712 series, for example, draws 1..4 µA when operated at low duty cycle (measuring 5 times per second should be more than enough for a door/window sensor). Its output could be used to wake up the µC, just as with a mechanical switch. Now we’re in the 5 µA ballpark, i.e. about 4 years on a CR2032 coin cell. Quite usable!

It can pay off to carefully review all possible options – for example, if we were to instead use a reed relay as door sensor, we might well end up with the best of both worlds: total shut-off via mechanical switching, yet reliable contact-less activation via a small magnet.

What about the radio

The RFM69 draws from 15 to 45 mA when transmitting a packet. Yet I’m not including this in the above calculations, for good reason:

  • it’s only transmitting for a few milliseconds
  • … and probably less than once every few minutes, on average
  • this means its duty cycle can stay well under 0.001%
  • which translates to less than 0.5 µA – again: on average

Transmitting a short packet only every so often is virtually free in terms of energy requirements. It’s a hefty burst, but it simply doesn’t amount to much – literally!
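The duty-cycle numbers above can be sketched in Python; the 3 ms burst and 5-minute interval are assumed values consistent with the rough figures in the bullet points:

```python
tx_ma = 30.0     # transmit current, mid-range of the 15..45 mA spec
burst_s = 0.003  # ~3 ms per packet (assumed)
period_s = 300.0 # one packet every 5 minutes (assumed)

duty = burst_s / period_s       # 1e-5, i.e. 0.001%
avg_ua = tx_ma * 1000.0 * duty  # average current in microamps

print("duty cycle: %.4f%%, average draw: %.2f uA" % (duty * 100, avg_ua))
```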

Conclusion

Aiming for wireless sensor nodes which never need to listen to incoming RF packets, and only send out brief ones very rarely, we can see that a coin cell such as the common CR2032 will be able to support nodes for several years. Assuming that the design of both hardware and software was properly done, of course.

And if the CR2032 doesn’t cut it – there’s always the CR2477 option to help us further.

JeeLabs: Doodling with decoders


With plenty of sensor nodes here at JeeLabs, I’ve been exploring and doodling a bit, to see how MQTT could fit into this. As expected, it’s all very simple and easy to do.

The first task at hand is to take all those “OK …” lines coming out of a JeeLink running RF12demo, and push them into MQTT. Here’s a quick solution, using Python for a change:

import serial
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    #client.subscribe("#")

def on_message(client, userdata, msg):
    # TODO pick up outgoing commands and send them to serial
    print(msg.topic+" "+str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60) # TODO reconnect as needed
client.loop_start()

ser = serial.Serial('/dev/ttyUSB1', 57600)

while True:
    # read incoming lines and split on whitespace
    items = ser.readline().split()
    # only process lines starting with "OK"
    if len(items) > 1 and items[0] == 'OK':
        # convert each item string to an int
        bytes = [int(i) for i in items[1:]]
        # construct the MQTT topic to publish to
        topic = 'raw/rf12/868-5/' + str(bytes[0])
        # convert incoming bytes to a single hex string
        hexStr = ''.join(format(i, '02x') for i in bytes)
        # the payload has 4 extra prefix bytes and is a JSON string
        payload = '"00000010' + hexStr + '"'
        # publish the incoming message
        client.publish(topic, payload) #, retain=True)
        # debugging                                                         
        print topic, '=', hexStr

Trivial stuff, once you install this MQTT library. Here is a selection of the messages being published to MQTT – these are from a bunch of nodes running radioBlip and radioBlip2:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

What needs to be done next, is to decode these to more meaningful results.

Due to the way MQTT works, we can perform this task in a separate process – so here’s a second Python script to do just that. Note that it subscribes and publishes to MQTT:

import binascii, json, struct, time
import paho.mqtt.client as mqtt

# raw/rf12/868-5/3 "0000000000030f230400"
# raw/rf12/868-5/3 "0000000000033c09090082666a"

# avoid having to use "obj['blah']", can use "obj.blah" instead
# see end of http://stackoverflow.com/questions/4984647
C = type('type_C', (object,), {})

client = mqtt.Client()

def millis():
    return int(time.time() * 1000)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("raw/#")

def batt_decoder(o, raw):
    o.tag = 'BATT-0'
    if len(raw) >= 10:
        o.ping = struct.unpack('<I', raw[6:10])[0]
        if len(raw) >= 13:
            o.tag = 'BATT-%d' % (raw[10] & 0x7F)
            o.vpre = 50 + raw[11]
            if raw[10] >= 0x80:
                o.vbatt = o.vpre * raw[12] // 255
            elif raw[12] != 0:
                o.vpost = 50 + raw[12]
        return True

def on_message(client, userdata, msg):
    o = C()
    o.time = millis()
    o.node = msg.topic[4:]
    # strip the surrounding JSON quotes, then convert hex to bytes
    raw = binascii.unhexlify(msg.payload[1:-1])
    if msg.topic == "raw/rf12/868-5/3" and batt_decoder(o, raw):
        #print(o.__dict__)
        out = json.dumps(o.__dict__, separators=(',', ':'))
        client.publish('sensor/' + o.tag, out) #, retain=True)

client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60)
client.loop_forever()

Here is what gets published, as a result of the above three “raw/…” messages:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
    "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856290589}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
    "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856354579}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,
    "vpre":152,"tag":"BATT-2","vbatt":60,"time":1435856418569}

So now, the incoming data has been turned into meaningful readings: it’s a node called “BATT-2”, the readings come in roughly every 64 seconds (as expected), and the received counter value is indeed incrementing with each new packet.
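As a sanity check, the decoder’s byte offsets can be replayed on those raw payloads without a radio or broker attached. This stand-alone sketch duplicates batt_decoder’s field extraction:

```python
import binascii, struct

# Stand-alone copy of batt_decoder's field extraction, same offsets as above.
def decode_batt(hex_payload):
    raw = binascii.unhexlify(hex_payload)
    out = {'tag': 'BATT-0'}
    if len(raw) >= 10:
        out['ping'] = struct.unpack('<I', raw[6:10])[0]   # little-endian counter
        if len(raw) >= 13:
            out['tag'] = 'BATT-%d' % (raw[10] & 0x7F)
            out['vpre'] = 50 + raw[11]
            if raw[10] >= 0x80:
                out['vbatt'] = out['vpre'] * raw[12] // 255
            elif raw[12] != 0:
                out['vpost'] = 50 + raw[12]
    return out

print(decode_batt('0000000000038d09090082666a'))
# tag BATT-2, ping 592269, vpre 152, vbatt 63 -- matching the published JSON
```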

Using a dynamic scripting language such as Python (or Lua, or JavaScript) has the advantage that it will remain very simple to extend this decoding logic at any time.

But don’t get me wrong: this is just an exploration – it won’t scale well as it is. We really should deal with decoding logic as data, i.e. manage the set of decoders and their use by different nodes in a database. Perhaps even tie each node to a decoder pulled from GitHub?

mharizanov: My own cloud based version control tool

There is no second opinion about the importance of version control: it is a must-have for any software project. Reversibility, concurrency and a history of code edits are what make it so crucial. I’ve been using mostly GitHub for the purpose; it seems to be the version control tool of choice for many these days. I have a number of personal projects that I want to keep private though, and with GitHub that is a premium option costing $7/month. Those would be dollars well spent, but since I already have a cloud-hosted VM, I figured I’d just use it for my own cloud version control tool. Being spoiled by GitHub’s intuitive web UI, I didn’t want to go for a raw git tool and was rather looking for a minimal change in experience.

My research ended up with me choosing GitLab CE, a powerful tool pretty much like GitHub, but with the ability to run off my own VM. I already run a $5/month 512MB RAM VPS at DigitalOcean; this blog runs off it. Adding GitLab to that machine is a bit of a stretch, as that is the absolute minimum required configuration, so I added a swap file to handle the increased resource demand. Since this will be a private repo, I don’t expect much added load anyway – it will just be used now and then by yours truly only.

My next challenge was to decide on the installation type: GitLab runs on Nginx and PostgreSQL, and I didn’t want to mess up my current Apache+MySQL setup. Docker was the natural choice of tool to create an isolated environment, and with the incredibly well implemented GitLab Docker image by Sameer Naik I had my own private cloud version control tool a few minutes later.

To harden security, especially after my recent DDoS/hack attempt woes, I restricted the GitLab HTTP port so it is visible only to certain IP addresses that I use.

Performance is just excellent. I was worried about running at minimum specs, but obviously my setup is quite low on load and everything works smoothly.

 


JeeLabs: Lessons from history

(No, not the kind of history lessons we all got treated to in school…)

What I’d like to talk about, is how to deal with sensor readings over time. As described in last week’s post, there’s the “raw” data:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

… and there’s the decoded data, i.e. in this case:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
    "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856290589}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
    "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856354579}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,
    "vpre":152,"tag":"BATT-2","vbatt":60,"time":1435856418569}

In both cases, we’re in fact dealing with a series of readings over time. This aspect tends to get lost a bit when using MQTT, since each new reading is sent to the same topic, replacing the previous data. MQTT is (and should be) 100% real-time, but blissfully unaware of time.

The raw data is valuable information, because everything else derives from it. This is why in HouseMon I stored each entry as timestamped text in a logfile. With proper care, the raw data can be an excellent way to “replay” all received data, whether after a major database or other system failure, or to import all the data into a new software application.

So much for the raw data, and keeping a historical archive of it all – which is good practice, IMO. I’ve been saving raw data for some 8 years now. It requires relatively little storage when saved as daily text files and gzip-compressed: about 180 MB/year nowadays.

Now let’s look a bit more at that decoded sensor data…

When working on HouseMon, I noticed that it’s incredibly useful to have access to both the latest value and the previous value. In the case of these “BATT-*” nodes, for example, having both allows us to determine the elapsed time since the previous reception (using the “time” field), or to check whether any packets have been missed (using the “ping” counter).

With readings of cumulative or aggregating values, the previous reading is in fact essential to be able to calculate an instantaneous rate (think: gas and electricity meters).

In the past, I implemented this by having each entry store a previous and a latest value (and time stamp), but with MQTT we could actually simplify this considerably.

The trick is to use MQTT’s brilliant “RETAIN” flag:

  • in each published sensor message, we set the RETAIN flag to true
  • this causes the MQTT broker (server) to permanently store that message
  • when a new client connects, it will get all saved messages re-sent to it the moment it subscribes to a corresponding topic (or wildcard topic)
  • such re-sent messages are flagged, and can be recognised as such by the client, to distinguish them from genuinely new real-time messages
  • in a way, retained message handling is a bit like a store-and-forward mechanism
  • … but do keep in mind that only the last message for each topic is retained

What’s the point? Ah, glad you asked :)

In MQTT, a RETAINed message is one which can very gracefully deal with client connects and disconnects: a client need not be connected or subscribed at the time such a message is published. With RETAIN, the client will receive the message the moment it connects and subscribes, even if this is after the time of publication.

In other words: RETAIN flags a message as representing the latest state for that topic.

The best example is perhaps a switch which can be either ON or OFF: whenever the switch is flipped we publish either “ON” or “OFF” to topic “my/switch”. What if the user interface app is not running at the time? When it comes online, it would be very useful to know the last published value, and by setting the RETAIN flag we make sure it’ll be sent right away.

The collection of RETAINed messages can also be viewed as a simple key-value database.
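That key-value view can be modelled in a few lines: a toy broker which keeps only the last retained message per topic, and hands it out on subscribe. This is an illustration of the semantics only, not Mosquitto’s actual implementation:

```python
# Toy model of a broker's retained-message store: last message per topic only.
class ToyBroker:
    def __init__(self):
        self.retained = {}                    # topic -> last retained payload

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload    # overwrite: only the last survives

    def subscribe(self, pattern):
        # simplistic '#' wildcard: treat 'sensor/#' as a prefix match
        prefix = pattern[:-1] if pattern.endswith('#') else pattern
        return {t: p for t, p in self.retained.items() if t.startswith(prefix)}

b = ToyBroker()
b.publish('sensor/BATT-2', '{"vbatt":63}', retain=True)
b.publish('sensor/BATT-2', '{"vbatt":60}', retain=True)   # replaces the first
print(b.subscribe('sensor/#'))   # only the latest message per topic comes back
```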

For an excellent series of posts about MQTT, see this index page from HiveMQ.

But I digress – back to the history aspect of all this…

If every “sensor/…” topic has its RETAIN flag set, then we’ll receive all the last-known states the moment we connect and subscribe as MQTT client. We can then immediately save these in memory, as “previous” values.

Now, whenever a new value comes in:

  • we have the previous value available
  • we can do whatever we need to do in our application
  • when done, we overwrite the saved previous value with the new one

So in memory, our applications will have access to the previous data, but we don’t have to deal with this aspect in the MQTT broker – it remains totally ignorant of this mechanism. It simply collects messages, and pushes them to apps interested in them: pure pub-sub!
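In application code, that previous-value bookkeeping amounts to very little. Here is a sketch (the topic name and units are made up for the example) which turns a cumulative meter reading into an instantaneous rate:

```python
# Keep the previous (time, value) per topic in memory, as described above.
prev = {}   # topic -> (time_ms, value)

def on_reading(topic, time_ms, value):
    rate = None
    if topic in prev:
        t0, v0 = prev[topic]
        rate = (value - v0) / ((time_ms - t0) / 1000.0)   # units per second
    prev[topic] = (time_ms, value)         # overwrite for the next reading
    return rate

print(on_reading('sensor/gas', 0, 1000))       # None: no previous value yet
print(on_reading('sensor/gas', 60000, 1012))   # 12 units in 60 s -> 0.2/s
```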

JeeLabs: A feel for numbers

It’s often really hard to get a meaningful sense of what numbers mean – especially huge ones.

What is a terabyte? A billion euro? A megawatt? Or a thousand people, even?

I recently got our yearly gas bill, and saw that our consumption was about 1600 m3 – roughly the same as last year. We’ve insulated the house, we keep the thermostat set fairly low (19°C), and there is little more we can do – at least in terms of low-hanging fruit. Since the house has an open stairway to the top floors, it’s not easy to keep the heat localised.

But what does such a gas consumption figure mean?

For one, those 1600 m3/y are roughly 30,000 m3 in the next twenty years, which comes to about €20,000, assuming Dutch gas prices will stay the same (a big “if”, obviously).

That 30,000 m3 sounds like a huge amount of gas, for just two people to be burning up.

Then again, a volume of 31 x 31 x 31 m sounds a lot less ridiculous, doesn’t it?

Now let’s tackle it from another angle, using the Wolfram Alpha “computational knowledge engine”, which is a really astonishing free service on the internet, as you’ll see.

How much gas is estimated to be left on this planet? Wolfram Alpha has the answer:

[screenshot: Wolfram Alpha result for the world’s remaining natural gas reserves]

How many people are there in the world?

[screenshot: Wolfram Alpha result for the world population]

Ok, let’s assume we give everyone today an equal amount of those gas reserves:

[screenshot: Wolfram Alpha result dividing the gas reserves by the world population]

Which means that we will reach our “allowance” (for 2) 30 years from now. Now that is a number I can grasp. It does mean that in 30 years or so it’ll all be gone. Totally. Gone.

I don’t think our children and all future generations will be very pleased with this…

Oh, and for the geeks in us: note how incredibly easy it is to get at some numerical facts, and how accurately and easily Wolfram Alpha handles all the unit conversions. We now live in a world where the well-off western part of the internet-connected crowd has instant and free access to all the knowledge we’ve amassed (Wikipedia + Google + Wolfram Alpha).

Facts are no longer something you have to learn – just pick up your phone / tablet / laptop!

But let’s not stop at this gloomy result. Here’s another, more satisfying, calculation using figures from an interesting UK site, called Electropedia (thanks, Ard!):

[…] the total Sun’s power intercepted by the Earth is 1.740×10^17 Watts

When accounting for the earth’s rotation, seasonal and climatic effects, this boils down to:

[…] the actual power reaching the ground generally averages less than 200 Watts per square meter

Aha, that’s a figure I can relate to again, unlike the “10^17” metric in the total above.

Let’s google for “heat energy radiated by one person”, which leads to this page, and on it:

As I recall, a typical healthy adult human generates in the neighborhood of 90 watts.

Interesting. Now an average adult’s calorie intake of 2400 kcal/day translates to 2.8 kWh. Note how this nicely matches up (at least roughly): 2.8 kWh/day is 116 watts, continuously. So yes, since we humans just burn stuff, it’s bound to end up as mostly heat, right?
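Spelled out, the arithmetic behind that match (using 1 kcal = 4184 J):

```python
# 2400 kcal/day expressed as kWh/day and as continuous watts.
kcal_per_day = 2400
joules = kcal_per_day * 4184        # 1 kcal = 4184 J
kwh = joules / 3.6e6                # 1 kWh = 3.6 MJ
watts = joules / 86400              # averaged over 24 * 3600 seconds
print(round(kwh, 1), round(watts))  # 2.8 116
```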

But there is more to be said about the total solar energy reaching our little blue planet:

Integrating this power over the whole year the total solar energy received by the earth will be: 25,400 TW × 24 × 365 = 222,504,000 TeraWatthours (TWh)

Yuck, those incomprehensible units again. Luckily, Electropedia continues, and says:

[…] the available solar energy is over 10,056 times the world’s consumption. The solar energy must of course be converted into electrical energy, but even with a low conversion efficiency of only 10% the available energy will be 22,250,400 TWh or over a thousand times the consumption.

That sounds promising: we “just” need to harvest it, and end all fossil fuel consumption.

And to finish it off, here’s a simple calculation which also very much surprised me:

  • take a world population of 7.13 billion people (2013 figures, but good enough)
  • place each person on his/her own square meter
  • put everyone together in one spot (tight, but hey, the subway is a lot tighter!)
  • what you end up with is, of course, 7.13 billion square meters, i.e. 7,130,000,000 m2
  • sounds like a lot? how about an area of 70 by 100 km? (1/6th of the Netherlands)

Then, googling again, I found out that 71% of the surface of our planet is water.

And with a little more help from Wolfram Alpha, I get this result:

[screenshot: Wolfram Alpha result for the land area available per person]

That’s 144 x 144 meters per person, for everyone on this planet. Although not every spot is habitable, of course. But at least these are figures I can fit into my head and grasp!
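That figure is easy to reproduce from the numbers above, together with the Earth’s total surface area of about 510 million km² (a standard figure):

```python
# Land area per person: 29% of the Earth's surface, divided by the population.
earth_km2 = 510.1e6                       # total surface area of the Earth
land_m2 = earth_km2 * (1 - 0.71) * 1e6    # 71% is water; km^2 -> m^2
per_person = land_m2 / 7.13e9             # 2013 world population
side = per_person ** 0.5
print(round(side))                        # 144 -- meters per side, per person
```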

Now if only I could understand why we can’t solve this human tragedy. Maths won’t help.


JeeLabs: Clojure and ClojureScript

I’m in awe. There’s a (family of) programming languages which solves everything. Really.

  • it works on the JVM, V8, and CLR, and it interoperates with what already exists
  • it’s efficient, it’s dynamic, and it has parallelism built in (threaded or cooperative)
  • it’s so malleable, that any sort of DSL can trivially be created on top of it

As this fella says at this very point in his video: State. You’re doing it wrong.

I’ve been going about programming in the wrong way for decades (as a side note: the Tcl language did get it right, up to a point, despite some other troublesome shortcomings).

The language I’m talking about re-uses the best of what’s out there, and even embraces it. All the existing libraries in JavaScript can be used when running in the browser or in Node.js, and similarly for Java or C# when running in those contexts. The VMs, as I already mentioned, also get reused, which means that decades of research and optimisation are taken advantage of.

There’s even an experimental version of this (family of) programming languages for Go, so there again, it becomes possible to add this approach to whatever already exists out there, or is being introduced now or in the future.

Due to the universal reach of JavaScript these days, on browsers, servers, and even on some embedded platforms, that is what interests me most, so what I’ve been sinking my teeth into recently is “ClojureScript”, which specifically targets JavaScript.

Let me point out that ClojureScript is not another “pre-processor” like CoffeeScript.

“State. You’re doing it wrong.”

As Rich Hickey, who spoke those words in the above video quickly adds: “which is ok, because I was doing it wrong too”. We all took a wrong turn a few decades ago.

The functional programming (FP) people got it right… Haskell, ML, that sort of thing.

Or rather: they saw the risks and went to a place where few people could follow (monads?).

FP is for geniuses

What Clojure and ClojureScript do, is to bring a sane level of FP into the mix, with “immutable persistent datastructures”, which makes it all very practical and far easier to build with and reason about. Code is a transformation: take stuff, do things with it, and return derived / modified / updated / whatever results. But don’t change the input data.

Why does this matter?

Let’s look at a recent project taking the world by storm: React, yet another library for building user interfaces (in the browser and on mobile). The difference with AngularJS is the conceptual simplicity. To borrow another image from a similar approach in CycleJS:

[diagram: the human-computer interaction loop, as drawn by CycleJS]

Things happen in a loop: the computer shows stuff on the screen, the user responds, and the computer updates its state. In a talk by CycleJS author Andre Staltz, he actually goes so far as to treat the user as a function: screen in, key+mouse actions out. Interesting concept!

Think about it:

  • facts are stored on the disk, somewhere on a network, etc
  • a program is launched which presents (some of it) on the screen
  • the user interface leads us, the monkeys, to respond and type and click
  • the program interprets these as intentions to store / change something
  • it sends out stuff to the network, writes changes to disk (perhaps via a database)
  • these changes lead to changes to what’s shown on-screen, and the cycle repeats

Even something as trivial as scrolling down is a change to a scroll position, which translates to a different part of a list or page being shown on the screen. We’ve been mixing up the view side of things (what gets shown) with the state (some would say “model”) side, which in this case is the scroll position – a simple number. The moment you take them apart, the view becomes nothing more than a function of that value. New value -> new view. Simple.
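To make the scroll example concrete, here is a purely illustrative sketch (in Python, for brevity) of a view that is nothing but a function of the data and the scroll position:

```python
# The view is a pure function of (lines, scroll): no hidden state anywhere.
def render(lines, scroll, height=3):
    return '\n'.join(lines[scroll:scroll + height])

doc = ['line %d' % i for i in range(10)]
print(render(doc, 0))   # shows lines 0..2
print(render(doc, 4))   # new scroll value -> new view, nothing else changed
```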

Nowhere in this story is there a requirement to tie state into the logic. It didn’t really help that object orientation (OO) taught us to always combine and even hide state inside logic.

Yet I (we?) have been programming with variables which remember / change and loops which iterate and increment, all my life. Because that’s how programming works, right?

Wrong. This model leads to madness. Untraceable, undebuggable, untestable, unverifiable.

In a way, Test-Driven Development (TDD) shows us just how messy it got: we need to explicitly compare what a known input leads to with the expected outcome. Which is great, but writing code which is testable becomes a nightmare when there is state everywhere. So we invented “mocks” and “spies” and what-have-you-not, to be able to isolate that state again.

What if everything we implemented in code were easily reducible to small steps which cleanly compose into larger units? Each step being a function which takes one or more values as state and produces results as new values? Without side-effects or state variables?

Then again, purely functional programming with no side-effects at all is silly in a way: if there are zero side-effects, then the screen wouldn’t change, and the whole computation would be useless. We do need side-effects, because they lead to a screen display, physical-computing stuff such as motion & sound, saved results, messages going somewhere, etc.

What we don’t need, is state sprinkled across just about every single line of our code…

To get back to React: that’s exactly where it revolutionises the world of user interfaces. There’s a central repository of “the truth”, which is in fact usually nothing more than a deeply nested JavaScript data structure, from which everything shown on the web page is derived. No more messing with the DOM, putting all sorts of state into it, having to update stuff everywhere (and all the time!) for dynamic real-time apps.

React (a.k.a. ReactJS) treats an app as a pipeline: state => view => DOM => screen. The programmer designs and writes the first two, React takes care of the DOM and screen.

I’ll get back to ClojureScript, please hang in there…

What’s missing in the above, is user interaction. We’re used to the following:

    mouse/keyboard => DOM => controller => state

That’s the Model-View-Controller (MVC) approach, as pioneered by Smalltalk in the 80’s. In other words: user interaction goes in the opposite direction, traversing all those steps we already have in reverse, so that we end up with modified state all the way back to the disk.

This is where AngularJS took off. It was founded on the concept of bi-directional bindings, i.e. creating an illusion that variable changes end up on the screen, and screen interactions end up back in those same variables – automatically (i.e. all taken care of by Angular).

But there is another way.

Enter “reactive programming” (RP) and “functional reactive programming” (FRP). The idea is that user interaction still needs to be interpreted and processed, but that the outcome of such processing completely bypasses all the above steps. Instead of bubbling back up the chain, we take the user interaction, define what effect it has on the original central-repository-of-the-truth, period. No figuring out what our view code needs to do.

So how do we update what’s on screen? Easy: re-create the entire view from the new state.

That might seem ridiculously inefficient: recreating a complete screen / web-page layout from scratch, as if the app was just started, right? But the brilliance of React (and several designs before it, to be fair) is that it actually manages to do this really efficiently.

Amazingly so in fact. React is faster than Angular.

Let’s step back for a second. We have code which takes input (the state) and generates output (some representation of the screen, DOM, etc). It’s a pure function, i.e. it has no side effects. We can write that code as if there is no user interaction whatsoever.

Think – just think – how much simpler code is if it only needs to deal with the one-way task of rendering: what goes where, how to visualise it – no clicks, no events, no updates!

Now we need just two more bits of logic and code:

  1. we tell React which parts respond to events (not what they do, just that they do)

  2. separately, we implement the code which gets called whenever these events fire, grab all relevant context, and report what we need to change in the global state

That’s it. The concepts are so incredibly transparent, and the resulting code so unbelievably clean, that React and its very elegant API is literally taking the Web-UI world by storm.

Back to ClojureScript

So where does ClojureScript fit in, then? Well, to be honest: it doesn’t. Most people seem to be happy just learning “The React Way” in normal main-stream JavaScript. Which is fine.

There are some very interesting projects on top of React, such as Redux and React Hot Loader. This “hot loading” is something you have to see to believe: editing code, saving the file, and picking up the changes in a running browser session without losing context. The effect is like editing in a running app: no compile-run-debug cycle, instant tinkering!

Interestingly, Tcl also supported hot-loading. Not sure why the rest of the world didn’t.

Two weeks ago I stumbled upon ClojureScript. Sure enough, they are going wild over React as well (with Om and Reagent as the main wrappers right now). And with good reason: it looks like Om (built on top of React) is actually faster than React used from JavaScript.

The reason for this is their use of immutable data structures, which forces you to not make changes to variables, arrays, lists, maps, etc. but to return updated copies (which are very efficient through a mechanism called “structural sharing”). As it so happens, this fits the circular FRP / React model like a glove. Shared trees are ridiculously easy to diff, which is the essence of why and how React achieves its good performance. And undo/redo is trivial.
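For those who have not used immutable structures: in Python terms the style looks like this. Only the observable behaviour is sketched here – Clojure’s persistent structures do the sharing at every level, far more efficiently than a shallow copy:

```python
# "Updating" means building a new map; the old one is left untouched.
old = {'tag': 'BATT-2', 'vbatt': 63, 'history': (60, 61, 62)}
new = {**old, 'vbatt': 60}               # fresh copy with one change

print(old['vbatt'], new['vbatt'])        # 63 60 -- the old value still exists
print(new['history'] is old['history'])  # True -- unchanged parts are shared
```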

Hot-loading is normal in the Clojure & ClojureScript world. Which means that editing in a running app is not a novelty at all, it’s business as usual. As with any Lisp with a REPL.

Ah, yes. You see, Clojure and ClojureScript are Lisp-like in their notation. The joke used to be that LISP stands for: “Lots of Irritating Little Parentheses”. When you get down to it, it turns out that there are not really many more of them than the parentheses and braces in JavaScript.

But notation is not what this is all about. It’s the concepts and the design which matter.

Clojure (and ClojureScript) seem to be born out of necessity. It’s fully open source, driven by a small group of people, and evolving in a very nice way. The best introduction I’ve found is in the first 21 minutes of the same video linked to at the start of this post.

And if you want to learn more: just keep watching that same video, 2:30 hours of goodness. Better still: this 1 hour video, which I think summarises the key design choices really well.

No static typing as in Go, but I often found myself fighting that anyway (and type hints can be added back in where needed). No callback hell as in JavaScript & Node.js, because Clojure has implemented Go’s CSP, with channels and go-routines as a library. Which means that even in the browser, you can write code as if there were multiple processes, communicating via channels in either synchronous or asynchronous fashion. And yes, it really works.
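The channel idea translates to any language with threads. A rough Python approximation of what core.async’s channels and go-routines provide – queues as channels, threads as processes:

```python
# Two "processes" communicating over a channel instead of sharing state.
import queue, threading

def producer(ch):
    for i in range(3):
        ch.put(i * 10)        # blocking send, like ch <- v in Go
    ch.put(None)              # sentinel: channel "closed"

ch = queue.Queue(maxsize=1)   # small buffer, so sends rendezvous with receives
threading.Thread(target=producer, args=(ch,)).start()

received = []
while (v := ch.get()) is not None:   # blocking receive, like <-ch
    received.append(v)
print(received)   # [0, 10, 20]
```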

All the libraries from the browser + Node.js world can be used in ClojureScript without special tricks or wrappers, because – as I said – CLJ & CLJS embrace their host platforms.

The big negative is that CLJ/CLJS are different and not main-stream. But frankly, I don’t care at this point. Their conceptual power is that of Lisp and functional programming combined, and this simply can’t be retrofitted into the popular languages out there.

A language that doesn’t affect the way you think about programming, is not worth knowing — Alan J. Perlis

I’ve been watching many 15-minute videos on Clojure by Tim Baldridge (it costs $4 to get access to all of them), and this really feels like it’s lightyears ahead of everything else. The amazing bit is that a lot of that (such as “core.async”) catapults into plain JavaScript.

As you can probably tell, I’m sold. I’m willing to invest a lot of my time in this. I’ve been doing things all wrong for a couple of decades (CLJ only dates from 2007), and now I hope to get a shot at mending my ways. I’ll report my progress here in a couple of months.

It’s not for the faint of heart. It’s not even easy (but it is simple!). Life’s too short to keep programming without the kind of abstractions CLJ & CLJS offer. Eh… In My Opinion.

JeeLabs: Space tools

It’s a worrisome sign when people start to talk about tools. No real work to report on?

With that out of the way, let’s talk about tools :) – programming tools.

Everyone has their favourite programmer’s editor and operating system. Mine happens to be Vim (MacVim) and Mac OSX. Yours will likely be different. Whatever works, right?

Having said that, I found myself a bit between a rock and a hard place lately, while trying out ClojureScript, that Lisp’y programming language I mentioned last week. The thing is that Lispers tend to use something called the REPL – constantly so, during editing in fact.

What’s a REPL for?

Most programming languages use a form of development based on frequent restarts: edit your code, save it, then re-run the app, re-run the test suite, or refresh the browser. Some development setups have turned this into a very streamlined and convenient fine art. This works well – after all, why else would everybody be doing things this way, right?

[diagram: the traditional edit / save / restart development cycle]

But there’s a drawback: when you have to stop the world and restart it, it takes some effort to get back to the exact context you’re working on right now. Either by creating a good set of tests, with “mocks” and “spies” to isolate and analyse the context, or by repeating the steps to get to that specific state in case of interactive GUI- or browser-based apps.

Another workaround, depending on the programming language support for it, is to use a debugger, with “breakpoints” and “watchpoints” set to stop the code just where you want it.

But what if you could keep your application running – assuming it hasn’t locked up, that is? So it’s still running, but just not yet doing what it should. What if we could change a few lines of code and see if that fixes the issue? What if we could edit inside a running app?

What if we could in fact build an app from scratch this way? Take a small empty app, define a function, load it in, see if it works, perhaps call the function from a console-style session running inside the application? And then iterate, extend, tweak, fix, add code… live?

This is what people have been doing with Lisp for over half a century. With a “REPL”:

[diagram: the edit / REPL cycle]

A similar approach has been possible for some time in a few other languages (such as Tcl). But it’s unfortunately not mainstream. It can take quite some machinery to make it work.

While a traditional edit-save-run cycle takes a few seconds, REPL-based coding is instant.

A nice example of this in action is in Tim Baldridge’s videos about Clojure. He never starts up an application in fact: he just fires up the REPL in an editor window, and then starts writing little pieces of code. To try it out, he hits a key combination which sends the parenthesised form currently under the cursor to the REPL, and that’s it. Errors in the code can be fixed and resent at will. Definitions, but also little test calls, anything.

More substantial bits of code are “require’d” in as needed. So what you end up with, is keeping a REPL context running at all times, and loading stuff into it. This isn’t limited to server-side code, it also works in the browser: enter “(js/alert "Hello")” and up pops a dialog. All it takes is the REPL to be running inside the browser, and some websocket magic. In the browser, it’s a bit like typing everything into the developer console, but unlike that setup, you get to keep all the code and trials you write – in the editor, with all its conveniences.

Figwheel

Another recent development in ClojureScript land is Figwheel by Bruce Hauman. There’s a 6-min video showing an example of use, and a very nice 45-min video where he goes into things in a lot more detail.

In essence, Figwheel is a file-driven hot reloader: you edit some code in your editor, you save the file, and Figwheel forces the browser (or node.js) to reload the code of just that file. The implementation is very different, but the effect is similar to Dan Abramov’s React Hot Loader – which works for JavaScript in the browser, when combined with React.

There are some limitations for what you can do in both the REPL-based and the Figwheel approach, but if all else fails you can always restart things and have a clean slate again.

The impact of these two approaches on the development process is hard to overstate: it’s as if you’re inside the app, looking at things and tweaking it as it runs. App restarts are far less common, which means server-side code can just keep running as you develop pieces of it further. Likewise, browser side, you can navigate to a specific page and context, and change the code while staying on that page and in that context. Even a scroll position or the contents of an input box will stay the same as you edit and reload code.

For an example Figwheel + REPL setup running both in the browser and in node.js at the same time, see this interesting project on GitHub. It’s able to do hot reloads on the server as well as on (any number of) browsers – whenever code changes. Here’s a running setup:

Edit figwheel

And here’s what I see when typing “(fig-status)” into Figwheel’s REPL:

Figwheel System Status
----------------------------------------------------
Autobuilder running? : true
Focusing on build ids: app, server
Client Connections
     server: 1 connection
     app: 1 connection
----------------------------------------------------

This uses two processes: a Figwheel-based REPL (JVM), and a node-based server app (v8). And then of course a browser, and an editor for actual development. Both Node.js and the browser(s) connect into the Figwheel JVM, which also lets you type in ClojureScript.

Spacemacs

So what do we need to work in this way? Well, for one, the language needs to support it and someone needs to have implemented this “hot reload” or “live code injection” mechanism.

For Figwheel, that’s about it. You need to write your code files in a certain way, allowing it to reload what matters without messing up the current state – “defonce” does most of this.
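Figwheel itself is ClojureScript territory, but the idea behind “defonce” – bind a value only if it isn’t already bound, so a reload doesn’t wipe accumulated state – is easy to sketch in another language. Here is a hypothetical Python analogy (purely illustrative; this is not how Figwheel is implemented):

```python
# Reload-safe state, in the spirit of ClojureScript's defonce:
# create the binding only on first evaluation, so re-running the
# module (a "reload") keeps the accumulated state intact.

def defonce(namespace, name, initial):
    """Bind name to initial only if it is not already bound."""
    return namespace.setdefault(name, initial)

module_globals = {}  # stand-in for a module's global namespace

# First load: the state is created, then mutated by the running app.
state = defonce(module_globals, "app_state", {"clicks": 0})
state["clicks"] += 1

# Simulated reload: defonce runs again, but the old value survives.
state = defonce(module_globals, "app_state", {"clicks": 0})
assert state["clicks"] == 1  # not reset to 0 by the reload
```

Code written this way can be re-evaluated freely, which is what makes the save-and-reload loop feel seamless.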

But the real gem is the REPL: having a window into a running app, and peeking and poking at its innards while in flight. If “REPL” sounds funny, then just think of it as “interactive command prompt”. Several scripting languages support this. Not C, C++, or Go, alas.

For this, the editor should offer some kind of support, so that a few keystrokes will let you push code into the app. Whether a function definition or a printf-type call, whatever.

And that’s where vim felt a bit inadequate: there are a few plugins which try to address this, but they all have to work around the limitation that vim has no built-in terminal.

In Emacs-land, there has always been “SLIME” for traditional Lisp languages, and now there is “CIDER” for Clojure (hey, I didn’t make up those names, I just report them!). In a long-ago past, I once tried to learn Emacs for a very intense month, but I gave up. The multi-key acrobatics is not for me, and I have tons of vim key shortcuts stashed into muscle memory by now. Some people even point to research to say that vim’s way works better.

For an idea of what people can do when they practically live inside their Emacs editor, see this 18-min video. Bit hard to follow, but you can see why some people call Emacs an OS…

Anyway, I’m not willing to unlearn those decades of vim conventions by now. I have used many other editors over the years (including TextMate, Sublime Text, and recently Atom), but I always end up going back. The mouse has no place in editing, and no matter how hard some editors try to offer a “vim emulation mode”, they all fail in very awkward ways.

And then I stumbled upon this thing. All I can say is: “Vim, reloaded”.

Wow – a 99% complete emulation, perhaps one or two keystrokes which work differently. And then it adds a whole new set of commands (based on the space bar, hence the name), incredibly nice pop-up help as you type the shortcuts, and… underneath, it’s… Emacs ???

Spacemacs comes with a ton of nice default configuration settings and plugins. Other than some font changes and some extra language bindings, I hardly change it. My biggest config tweak so far has been to make it start up with a fixed position and window size.

So there you have it. I’m switching my world over to ClojureScript as main programming language (which sounds more dramatic than it is, since it’s still JavaScript + browser + node.js in the end), and I’m switching my main development tool to Emacs (but that too is less invasive than it sounds, since it’s Vim-like and I can keep using vim on remote boxes).

JeeLabs: Hundertwasser

No techie post this time, just some pictures from a brief trip last week to Magdeburg:

IMG 0727

… and on the inside, even more of a little playful fantasy world:

IMG 0700

This was designed by the architect Friedensreich Hundertwasser at the turn of this century. It was the last project he worked on, and the building was in fact completed after his death.

Feels a bit like an Austrian (and more restrained) reincarnation of Antoni Gaudí to me.

A playful note added to a utilitarian construction – I like it!

John Cantor : Heat Pump Performance Monitoring Examples

On 10th September I will be giving a brief presentation at the Ground Source Heat Pump Expo at the Ricoh Arena on the topic ‘Energy or Performance Monitoring’, so it’s timely to do a little blog here to elaborate on some of the examples I will be showing from some OpenEnergyMonitor dashboards I have been using.

Note: the below are just examples for this blog, and don't necessarily show the whole story.

I have been working with OpenEnergyMonitor for some time, and now have various installs using OEM kit from Megni.co.uk.
In brief, most of the systems I have installed use around 8 temperature sensors, CT power measurement with voltage sensing (real power, including power factor), and/or pulse counting from a standard electrical kWh meter. I have used Grundfos VFS flow sensors, but we are currently working on direct interrogation of a Kamstrup heat meter, giving heat output directly.

Data is sent via Ethernet to www.emoncms.org and displayed on dashboards (as samples below). These real-time graphs give a fantastic tool for the installer and the home owner. They show exactly what is happening now, and what has happened over the previous hour/week/month or year. The information can be used to improve the design of a system and also be used to fine tune the user settings.

Let’s start with a SIMPLE dashboard example

This type of dashboard can be accessed on any internet-connected computer using www.emoncms.org    e.g.  www.emoncms.org/example

The dashboard above shows a bar graph of daily energy input to the heat pump. This can be checked periodically for unusual values. It shows high use on 15th March; by mousing over the graph we can see that 24.2 kWh were used on this day. The reason for this high use could be investigated. The time period can easily be changed to anything you wish by zooming in or out. Below this is the outside temperature. This might be interesting in its own right, but it becomes more interesting when compared to energy used per day. To the right are a few useful dials and figures – cylinder temperature, room and outside temperature – things any home owner might like to know. We can also see that from 6th April the system is switched off.
This type of simple dashboard is ideal for the home owner, but we can make as many dashboards as we like, of varying complexity and detail. These are very useful for installers and designers, and a far more in-depth analysis can be made.

First, a good example of a GSHP connected to underfloor heating.

This is a 12 kW (max output) inverter-driven GSHP operating over a 40-minute period here. The green area represents electrical input and the purple represents heat (direct reading from the Kamstrup meter). The ratio of these two areas gives the COP. We can see the flow and return temperatures slowly ramping up to a final flow temperature of only 32°C. Since this is September, the ground collector is exceptionally warm. This, along with the low flow temperature, explains why the COP is currently almost 6. Current conditions are ideal, but from tests earlier in the year we expect to see average COPs for heating in excess of 4.
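The “ratio of the two areas” is just the ratio of two energies, each obtained by integrating a sampled power curve. A rough sketch in Python (the sample values are invented for illustration, not taken from the dashboard):

```python
# COP as the ratio of two "areas": heat energy out over electrical
# energy in, each integrated from evenly spaced power samples.

def energy_kwh(samples_kw, interval_s):
    """Integrate evenly spaced power samples (kW) into energy (kWh)."""
    return sum(samples_kw) * interval_s / 3600.0

electrical_kw = [2.0, 2.1, 2.1, 2.2]      # input power, one sample per 10 min
heat_kw = [11.5, 12.0, 12.2, 12.4]        # heat meter output, same timestamps

e_in = energy_kwh(electrical_kw, 600)     # 1.4 kWh over the 40 minutes
e_out = energy_kwh(heat_kw, 600)          # about 8.0 kWh of heat
cop = e_out / e_in
print(round(cop, 2))                      # → 5.73, i.e. "almost 6"
```

emoncms does this kind of integration for you when it turns power feeds into kWh feeds; the snippet just makes the arithmetic explicit.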
This graph shows a healthy flow-return dt of 6 degrees. It also shows how nicely the speed of the compressor drops in response to the rising flow temperature.


Below is another snip. This time from a fixed-speed GSHP. This one shows the source temperature too.

This shows a period of about 1/3rd of a day, with approx. 30-minute run durations, which is quite acceptable. The flow and return are nice and low, with average flow temperature around 30°C. The underfloor is a good design here, but the source is dipping below zero. This is not ideal, but a zoom-out to yearly temperatures and knowledge of total heat used would give a better understanding. In this case, since the underfloor is so good, it may be acceptable to have a slightly inferior ground source.

Next, a good example of a GSHP heating a domestic hot water cylinder. The cylinder is copper with the heat exchanger coil in the bottom section of the cylinder. The heat pump is only 3.5 kW and the coil is a nice large 3 sq. m.


This example shows the heat pump electrical input power as the shaded yellow area (no heat meter fitted). The heating period starts at about 1.3 kW input and finishes at around 1.7 kW. It also shows four temperatures: cylinder top and bottom, and the flow and return temperatures.

This graph shows the early evening heating period, the system having been held off by a time clock.
As we can see, the top of the cylinder is still at a usable 50°C before heating, but the bottom has dropped to 40°C. The 24-minute heating period shown here starts by heating the bottom water from 40°C. Indeed, this system has been set up carefully to ensure it heats from a lower starting temperature. The heat pump ‘sees’ flow and return temperatures of only 45/40°C at the start. The 40°C cylinder bottom (not very hot) ‘pulls down’ the heat pump working temperatures, resulting in high energy efficiency. By looking up the heat pump’s performance data, we can estimate the average COP with reasonable accuracy; here it is about 3.5 at the start of the heating-up cycle.
As the cylinder warms, we can observe the point just before 18:00 where the bottom is becoming warmer than the top, and natural convection causes the top of the cylinder to rise along with the bottom. After about 25 minutes the whole cylinder has reached about 53°C. At the end, the heat pump ‘sees’ temperatures of 55/52°C. This is getting quite hot, and close to the limit of the heat pump’s comfort zone. The COP here may be about 2.8 (taken from heat pump data). We can then look at the period of time the heat pump has spent at different COPs, and estimate the COP for the whole DHW heating session. It’s somewhere around 3.05.
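That session-average figure is essentially an energy-weighted average of the COP in each phase of the heat-up. A small Python sketch of the method (the phase durations, powers and COPs below are illustrative stand-ins, loosely matching the 3.5-to-2.8 range described above):

```python
# Session-average COP: weight each phase's COP by the electrical
# energy used in that phase, then divide total heat by total input.

phases = [
    # (minutes, input_kw, estimated_cop)
    (10, 1.3, 3.5),  # start: the cool cylinder bottom keeps temperatures low
    (8, 1.5, 3.1),   # mid heat-up
    (6, 1.7, 2.8),   # final approach to the target temperature
]

e_in = sum(mins / 60 * kw for mins, kw, _ in phases)          # kWh in
e_out = sum(mins / 60 * kw * cop for mins, kw, cop in phases) # kWh out
avg = e_out / e_in
print(round(avg, 2))  # → 3.16 with these made-up numbers
```

Note that the phases with the most electrical input dominate the average, which is why keeping the start of the cycle cool (and efficient) pays off.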

If the system were enabled 24/7, and the sensor position not optimised, the lower cylinder would not drop so far, so the cylinder would heat more frequently from a higher starting point. The average working temperature would be much higher, so the COP would be lower. At worst, the COP could be not much better than 2.8. Added to this, losses from the pipe run and start-up losses could result in worse performance.

We can therefore use the monitor to set the system to operate at a low average temperature, while the cylinder top remains at a useful temperature (e.g. 47°C).
We can also see how nice and close the final cylinder temperature (53°C) is to the maximum flow temperature of the heat pump (55°C). This minimises the need for the immersion heater (with a COP of only 1). In this case, the compact copper heat exchanger is exceptionally large compared to the heat pump size, and the coil occupies only the lower section of the cylinder. This gives exceptionally good results, and allows us to heat some of the water in a ‘batch’ from a colder starting point.

For the next example we have a complete contrast. This is a very inefficient system!

This one is a 14 kW ASHP. The heat pump is fine, and functions exceptionally well, but the cylinder heat exchanger is debatably a little small for this big heat pump.

Looking at the graph, heating starts when the middle of the cylinder is 48°C. The flow temperature runs up to 60°C within 15 minutes, at which point the input power drops and the heat pump ‘tracks’ the 60°C flow temperature. After 30 minutes running, the cylinder is 55°C. The flow temperature here is considerably higher than in our previous example – partly because the heat pump does NOT reduce its speed, and partly due to the smaller heat exchanger coil – but it has not done too badly.
However, the period after 55°C is clearly grossly inefficient. We can see that the compressor switches off frequently and spends the next 3.5 hours(!) attempting to achieve 60°C. The other thing to mention here is that the distance between the heat pump and cylinder is around 15 m. What is actually happening is that most of the heat is simply being lost from the pipe run. The energy consumed is shown by the yellow area of the power plot. The final 5 degrees (to 60°C) uses several times the power (area) of the first section.
The biggest problem here is poor use of the controls. Clearly, it would make a lot of sense to adjust the hot water setting to 55°C so that the heat pump stops.
The final ‘floor heating’ period is just as bad as the DHW period. Here, only 1 or 2 underfloor zones are open, so the flow rate is far too low, as can be seen from the large temperature difference between the flow and return. This is in excess of 10 degrees. Again, the heat dissipated by the floor is far too small for this large heat pump.
This is a clear case of an over-sized heat pump connected to a cylinder and emitter system that are too small for it. A smaller unit would work far better.
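The link between flow rate and the flow-return temperature difference is just the heat-transport equation: heat carried = flow × specific heat × dT. A quick Python sketch (the 12 kW figure is an illustrative heat output, not a measurement from this system):

```python
# Why a large flow-return dT signals low flow: for a fixed heat
# output Q = flow * cp * dT, so halving the flow doubles the dT.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K); ~1 kg per litre

def flow_lps(heat_kw, dt_k):
    """Water flow (litres/second) needed to carry heat_kw at a given dT."""
    return heat_kw * 1000.0 / (CP_WATER * dt_k)

print(round(flow_lps(12.0, 5.0), 2))   # → 0.57 l/s at a healthy 5 K dT
print(round(flow_lps(12.0, 10.0), 2))  # → 0.29 l/s when dT exceeds 10 K
```

So when most zones are closed and the flow drops, the dT balloons, which is exactly what the graph shows.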


Finally, just to top everyone up with a little heat pump theory, I am adding a graph that I use to illustrate heat pump efficiency vs. output temperature. If there is one thing to learn about heat pumps – LEARN THIS.

Here we have the characteristics of 2 sample heat pumps. The vertical Y axis shows the efficiency, the COP. A 3 kW immersion heater gives 3 kW of heat; it has a COP of 1. Heat pumps give out more heat than they consume because they extract heat from outside. The X axis shows operating output temperatures, ranging from tepid on the left to very hot on the right.

I am showing 2 typical heat pumps. A typical (R407C refrigerant) unit can reach say 55°C, whilst a ‘high temperature’ (R134a refrigerant) unit may achieve 65°C. Anyhow, whatever the type, we can see that to the left, where the water is lukewarm, the heat pump has an easy time, hence the COP is very high (1 kW in for 4.5 kW of heat). I liken this to driving a car up a slight incline: we should get good fuel economy here, maybe 50 mpg. However, if we heat up to 65°C, the temperature ‘lift’ is great, and this is a little like driving a car up a steep incline – we are in a low gear and the MPG is only 20! In the same way that you will NEVER get good fuel economy when driving up a very long steep hill, you will not get a good COP when heating to a high temperature. That said, it should always be better than using an immersion heater.
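The downward slope of these curves follows from thermodynamics: the ideal (Carnot) COP is T_hot / (T_hot − T_cold) in absolute temperatures, and real machines achieve some fraction of that. A Python sketch of the shape (the 45% Carnot fraction is an assumed round number, not a figure for any particular heat pump):

```python
# The COP curve's shape from the Carnot limit: COP_ideal = Th/(Th - Tc)
# in kelvin. Real heat pumps manage very roughly 40-50% of the ideal;
# the carnot_fraction below is an illustrative assumption.

def cop_estimate(flow_c, source_c, carnot_fraction=0.45):
    """Rough COP estimate for a given flow and source temperature (°C)."""
    t_hot = flow_c + 273.15
    t_cold = source_c + 273.15
    return carnot_fraction * t_hot / (t_hot - t_cold)

# With a 0°C source, the COP falls steadily as the 'lift' grows:
for flow in (35, 45, 55, 65):
    print(flow, round(cop_estimate(flow, 0.0), 1))
# roughly 4.0 at 35°C down to about 2.3 at 65°C
```

The same formula also shows why a warm source helps: raising `source_c` shrinks the denominator just as effectively as lowering the flow temperature does.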

So, the performance of any heat pump should be understood, and data should be available for all models relating output temperatures to specific ground-source or air-source temperatures.

If you have a high temperature heat pump, it doesn't mean you always have to operate it at a high temperature. If you operate it at lower temperatures, the performance should be far better. It is, however, always a good idea to find out the working limits of your unit.

Some of you may have wondered about the flattening of the curve at very low temperatures. I have drawn it that way because some heat pumps are not very good at the extremities (limits) of their performance range. However, I am finding (and partly guessing) that most inverter heat pumps with electronic expansion valves work very well over a very wide range, so some heat pumps will easily exceed COPs of 5 in ideal conditions (usually late spring or autumn, when not much heating is needed). Never forget mid-winter conditions – this is the time we need the most heat, so it is the operational area to focus on.


So, it is with an understanding of the characteristics of heat pumps that performance monitoring can be used to great advantage. In general, we want heat pumps to spend as much time as possible at low output temperatures (and high source temperatures).


JeeLabs: Bandwagons and islands

I’ve always been a fan of the Arduino ecosystem, hook, line, and sinker: that little board, with its AVR microcontroller, the extensibility, through those headers and shields, and the multi-platform IDE, with its simple runtime library and access to all its essential hardware.

So much so, that the complete range of JeeNode products has been derived from it.

But I wanted a remote node, a small size, a wireless radio, flexible sensor options, and better battery lifetimes, which is why several trade-offs came out differently: the much smaller physical dimension, the RFM radio, the JeePort headers, and the FTDI interface as alternative for a built-in USB bridge. JeeNodes owe a lot to the Arduino ecosystem.

That’s the thing with big (even at the time) “standards”: they create a common ground, around which lots of people can flock, form a community, and extend it all in often quite surprising and innovative ways. Being able to acquire and re-use knowledge is wonderful.

The Arduino “platform” has a bandwagon effect, whereby synergy and cross-pollination of ideas lead to a huge explosion of projects and add-ons, both on the hardware and the software side. Just google for “Arduino” … need I say more?

Yet sometimes, being part of the mainstream and building on what has become the “baseline” can be limiting: the 5V conventions of early Arduinos don’t play well with most of the newer sensor chips these days, nor are they optimal for ultra low-power uses. Furthermore, the Wiring library on which the Arduino IDE’s runtime is based is not terribly modular or suitable for today’s newer µC’s. And to be honest, the Arduino IDE itself is really quite limited compared to many other editors and IDE’s. Last but definitely not least, C++ support in the IDE is severely crippled by the pre-processing applied to turn .ino files into normal .cpp files before compilation.

It’s easy to look back and claim 20-20 vision in hindsight, so in a way most of these issues are simply the result of a platform which has evolved far beyond the original designer’s wildest dreams. No one could have predicted today’s needs at that point in time.

There is also another aspect to point out: there is in fact a conflict w.r.t. what this ecosystem is for. Should it be aimed at the non-techie creative artist, who just wants to get some project going without becoming an embedded microelectronics engineer? Or is it a playground for the tech geek, exploring the world of physical computing, diving in to learn how it works, tinkering with every aspect of this playground, and tracing / extending the boundaries of the technology to expand the user’s horizon?

I have decades of software development experience under my belt (and by now probably another decade of physical computing), so for me the Arduino and JeeNode ecosystem has always been about the latter. I don’t want a setup which has been “dumbed down” to hide the details. Sure, I crave for abstraction to not always have to think about all the low-level stuff, but the fascination for me is that it’s truly open all the way down. I want to be able to understand what’s under the hood, and if necessary tinker with it.

The Arduino technology doesn’t have that many secrets any more for me, I suspect. I think I understand how the chips work, how the entire circuit works, how the IDE is set up, how the runtime library is structured, how all the interrupts work together, yada, yada, yada.

And some of it I’m no longer keen to stick to: the basic editing + compilation setup (“any editor + makefiles” would be far more flexible), the choice of µC (so many more fascinating ARM variants out there than what Atmel is offering), and in fact the whole premise of the edit-compile-upload-run cycle seems limiting (over-the-air uploads or visual system construction, anyone?).

Which is why for the past year or so, I’ve started bypassing that oh-so-comfy Arduino ecosystem for my new explorations, starting from scratch with an ARM gcc “toolchain”, simple “makefiles”, and using the command-line to drive everything.

Jettisoning everything on the software side has a number of implications. First of all, things become simpler and faster: less tools to use, (much) lower startup delays, and a new runtime library which is small enough to show the essence of what a runtime is. No more.

A nice benefit is that the resulting builds are considerably smaller. Which was an important issue when writing code for that lovely small LPC810 ARM chip, all in an 8-pin DIP.

Another aspect I very much liked, is that this has allowed me to learn and subsequently write about how the inside of a runtime library really works and how you actually set up a serial port, or a timer, or a PWM output. Even just setting up an I/O pin is closer to the silicon than the digitalWrite(...) abstraction provided by the Arduino runtime.

… but that’s also the flip side of this whole coin: ya gotta dive very deep!

By starting from scratch, I’ve had to figure out all the nitty gritty details of how to control the hardware peripherals inside the µC, tweaking bit settings in some very specific way before it all started to work. Which was often quite a trial-and-error ordeal, since there is nothing you can do other than to (re-) read the datasheet and look at proven example code. Tinker till your hair falls out, and then (if you’re lucky) all of a sudden it starts to work.

The reward for me, was a better understanding, which is indeed what I was after. And for you: working examples, with minimal code, and explained in various weblog posts.

Most of all this deep-diving and tinkering can now be found in the embello repository on GitHub, and this will grow and extend further over time, as I learn more tricks.

Embello is also a bit of an island, though. It’s not used or known widely, and it’s likely to stay that way for some time to come. It’s not intended to be an alternative to the Arduino runtime; it’s not even intended to become the ARM equivalent of JeeLib – the library which makes it easy to use the ATMega-based JeeNodes with the Arduino IDE.

As I see it, Embello is a good source of fairly independent examples for the LPC8xx series of ARM µC’s, small enough to be explored in full detail when you want to understand how such things are implemented at the lowest level – and guess what: it all includes a simple Makefile-based build system, plus all the ready-to-upload firmware.bin binary images. With the weblog posts and the Jee Book as “all-in-one” PDF/ePub documentation.

Which leaves me at a bit of a bifurcation point as to where to go from here. I may have to row back from this “Embello island” approach to the “Arduino mainland” world. It’s no doubt a lot easier for others to “just fire up the Arduino IDE” and load a library for the new developments here at JeeLabs planned for later this year. Not everyone is willing to learn how to use the command line, just to be able to power up a node and send out wireless radio packets as part of a sensor network. Even if that means making the code a bit bulkier.

At the same time, I really want to work without having to use the Arduino IDE + runtime. And I suspect there are others who do too. Once you’ve developed other software for a while, you probably have adopted a certain work style and work environment which makes you productive (I know I have!). Being able to stick to it for new embedded projects as well makes it possible to retain that investment (in routine, knowledge, and muscle memory).

Which is why I’m now looking for a way to get the best of both worlds: retain my own personal development preferences (which a few of you might also prefer), while making it easy for everyone else to re-use my code and projects in that mainstream roller coaster fashion called “the Arduino ecosystem”. The good news is that the Arduino IDE has finally evolved to the point where it can actually support alternate platforms, including ARM.

We’ll see how it goes… all suggestions and pointers welcome!
