Getting and showing the Wolfram Alpha picture

Using the Wolfram Alpha API to get a picture of what was passed along from the Lambda function.


In this post I’ll be showing you how I got the picture from Wolfram Alpha and showed it on the mirror.

The newly created MMM-infodisplay module accepts a notification called INFO-DISPLAY.  All it does is pass the notification and the payload along to the node helper using the sendSocketNotification method.

[Screenshot: the notificationReceived handler in MMM-infodisplay]
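In sketch form, the forwarding looks like this.  In the real module this sits inside MagicMirror's Module.register("MMM-infodisplay", { ... }); here sendSocketNotification is stubbed out so the sketch is self-contained, and everything except the notification name is an assumption:

```javascript
// Sketch of the MMM-infodisplay front end.  The stub below only records what
// would be sent to the node helper; MagicMirror normally provides
// sendSocketNotification itself.
const infoDisplay = {
  sent: [], // stand-in for MagicMirror's socket plumbing
  sendSocketNotification(notification, payload) {
    this.sent.push({ notification, payload });
  },
  notificationReceived(notification, payload) {
    if (notification === "INFO-DISPLAY") {
      // Just pass everything along to the node helper untouched.
      this.sendSocketNotification(notification, payload);
    }
  },
};
```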

Inside the node helper of MMM-infodisplay I created a new function called getValues.  Essentially, all it does is loop through the keys of a JSON object.  If a property is itself an object, it recursively loops through its contents as well.  The results are pushed into an array, which is returned.

[Screenshot: the getValues function in the node helper]
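A sketch of how such a recursive lookup can look — the exact signature isn't visible in the screenshot, so taking the key to search for as a parameter is an assumption:

```javascript
// Recursively collect every value stored under the given key anywhere in a
// parsed JSON document.  Nested objects (and arrays) are walked as well.
function getValues(obj, key) {
  let results = [];
  for (const prop in obj) {
    const value = obj[prop];
    if (typeof value === "object" && value !== null) {
      // Property is itself an object: recurse and keep its matches too.
      results = results.concat(getValues(value, key));
    } else if (prop === key) {
      results.push(value);
    }
  }
  return results;
}
```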

In the socketNotificationReceived function I handle the notifications passed along from the module.  First, the URL is built. You can find all of the info on how to build the URL on the Wolfram Alpha site here.

The encodeURIComponent function takes care of the characters that aren’t supported in a URL, like a slash or a space.  I’m only passing on the content part of the payload, which is set in the Lambda function from one of my earlier posts.  Besides that, I ask to get the result in JSON format, and to only retain the part of the output containing an image.
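Put together, the URL construction looks roughly like this.  The parameter names follow the Wolfram Alpha v2 query API; filtering on the Image pod via includepodid is my reading of the description above, and the App ID is a placeholder:

```javascript
const appId = "YOUR-APP-ID"; // placeholder, not a real App ID

// Build the Wolfram Alpha query URL for the content part of the payload.
function buildWolframUrl(content) {
  return (
    "http://api.wolframalpha.com/v2/query" +
    "?input=" + encodeURIComponent(content) + // handles spaces, slashes, ...
    "&appid=" + appId +
    "&output=JSON" +        // ask for JSON instead of the default XML
    "&includepodid=Image"   // only keep the part of the output with an image
  );
}
```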

I’m using the request module to fetch the URL and parse the body in a callback function.  I parse the JSON from the body and use the getValues function to get the “src” attribute.  If you examine the result of the API call closely, you can see that the image URL is contained in this attribute.  If I find something in the returned array, I send it back to the MMM-infodisplay module with a sendSocketNotification call using the “display-image” notification.

[Screenshot: the socketNotificationReceived handler in the node helper]
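With the HTTP call factored out, the parsing step can be sketched like this — the request call itself is omitted so the sketch stands on its own, and the response body used below is a simplified assumption of Wolfram Alpha's JSON shape:

```javascript
// Recursively collect all values stored under a given key (the getValues idea
// described earlier, repeated here so this sketch is self-contained).
function getValues(obj, key) {
  let results = [];
  for (const prop in obj) {
    const value = obj[prop];
    if (typeof value === "object" && value !== null) {
      results = results.concat(getValues(value, key));
    } else if (prop === key) {
      results.push(value);
    }
  }
  return results;
}

// What the request callback does with the response body: parse the JSON, pull
// out every "src" attribute, and return the first hit (or null if none found).
// The real code then sends the hit back with the "display-image" notification.
function extractImageSrc(body) {
  const sources = getValues(JSON.parse(body), "src");
  return sources.length > 0 ? sources[0] : null;
}
```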

Here’s the code that runs when this notification is received.  It just sets the source of the image to the payload and makes the image visible.

[Screenshot: the display-image handler in the MMM-infodisplay module]
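As a sketch, with a plain object standing in for the actual image element — in the real module this is a DOM node created by the module's getDom method:

```javascript
// Stand-in for the <img> element on the mirror (a real module manipulates the
// DOM node itself); its shape here is an assumption for illustration.
const image = { src: "", style: { display: "none" } };

function socketNotificationReceived(notification, payload) {
  if (notification === "display-image") {
    image.src = payload;            // payload is the image URL
    image.style.display = "block";  // make the image visible
  }
}
```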


Time to test. I just uploaded a SHOW-PICTURE command with content “Michael Jackson” using the debug hub.

[Screenshot: sending the SHOW-PICTURE command from the debug hub]

Let’s see the result:

[Screenshot: the picture shown on the mirror]

Works like a charm!

Making a connection to Wolfram Alpha

Connecting to the Wolfram Alpha service to get information on a certain subject.

Now that we’ve got Alexa talking to our mirror, it’s time to go one step further.  I want to ask Alexa to show a picture of someone or something on the mirror.

I’ll be doing this in a few steps.

  1. I want to be able to do this in the Debug hub.  Period.  No voice interaction or anything.
  2. I want to be able to ask Alexa to show a picture by sending the command on to the mirror, and let the mirror search for an image.  In this scenario Alexa won’t respond with audio (or at least nothing meaningful), but only send the command.
  3. The last scenario is to let Alexa search for an image and have it explain something to me.  In this scenario I want to be able to distinguish between visual-only feedback and combined audio and visual feedback.

The scenario with only audio feedback already exists: that’s the Alexa service itself.

So let’s get cracking.  The first thing to decide is which service to use to get images.  There are a few options.  The first one that comes to mind is Google: the custom search API does something like that.  More information here.  That’s cool.  You get 100 free API calls per day.  That’s 3100 in a long month.

An alternative is Wolfram Alpha.  It is very similar, but you “only” get 2000 calls per month.  I kind of like this one because it gives you a lot of info, and days with a lot of calls can compensate for days with few calls.

Anyway.  I might implement both alternatives and combine them.  We’ll see.

So let’s start with Wolfram Alpha.  You can get started with API development here.

It’s quite straightforward.  Just register with Wolfram Alpha as a user, and then create a new application.

[Screenshot: creating a new application on the Wolfram Alpha developer portal]

Just give it a name and a description.

[Screenshot: naming and describing the new application]

And there you have it.

[Screenshot: the generated App ID]

Theoretically, you could create multiple App IDs, and hence increase the number of calls you can make.  However, it clearly states that the maximum number of apps is 20.  I could actually create separate ones for the debug hub, the mirror, and Alexa.  But let’s start with a single one.

The next thing to do is extend the debug hub: an extra tab on the left-hand side, with a query text area and a button.

[Screenshot: the new query tab in the debug hub]

Writing the code is pretty easy.  Look at the extract below.
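The extract in the original post is a screenshot; a sketch of the same logic, with the URL construction pulled into a helper — the “querystring” element name is from the post, while the button id and App ID are placeholders:

```javascript
// Build the query URL from the text area's value.  encodeURIComponent
// replaces spaces with %20 and escapes other characters not allowed in a URL.
function buildQueryUrl(query, appId) {
  return (
    "http://api.wolframalpha.com/v2/query?input=" +
    encodeURIComponent(query) +
    "&appid=" + appId
  );
}

// In the debug hub the text area is referenced via jQuery, e.g.:
// $("#searchButton").on("click", function () {
//   const url = buildQueryUrl($("#querystring").val(), "YOUR-APP-ID");
//   // ...fetch the URL and display the raw result...
// });
```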


The only thing it does is construct a URL, concatenating in the value of the text area, followed by the App ID.  The text area is referenced via jQuery; the element is called “querystring”.  The encodeURIComponent function takes care of encoding the URL: spaces get replaced with %20, and other characters not allowed in a URL get replaced as well.

The result is displayed below.  Notice how it’s a whole bunch of information.  The image is in there, but so is a lot of other information we could leverage.  But that’s for later on.

[Screenshot: the raw Wolfram Alpha API response in the debug hub]


Small note: getting the information back from Wolfram Alpha took quite a few seconds.  That might prove an issue in the future, but for now, let’s be happy with what we’ve got.


The next step will be extracting the image from this information.  I’ll keep you posted on that one.