Extending the lambda function to get user input

Extending the lambda function to interpret the query for a picture of someone or something.


What’s next? I want to extend the lambda function so I can ask the mirror to show me a picture of something or someone. The mirror will interpret this and get the picture for me. But first things first: I’ll be extending my lambda function to interpret slots. A slot is a sort of parameter you can pass in your Alexa intent schema, something like “show me a picture of <abc>”. In this example <abc> is a slot of type LITERAL. Alexa will interpret your sentence and automatically assign what you said to the slot. It’s up to the lambda function to extract this info and do something with it.

Below you can find my lambda function.  I added another intent at the bottom.  I called it “ShowPictureIntent”, and I will be configuring it later on in my Alexa intent schema.

[Screenshot: the lambda function with the new ShowPictureIntent handler]

So what do we see here? First we check whether there are slots in the intent, and if so, whether an attribute called “Content” is present. This is an arbitrarily chosen attribute name. It could be anything, but I chose to call it Content because it might serve another purpose in the future as well. If the slot isn’t there, we let Alexa say that she didn’t understand me.

If it is there, however, the command is issued to the DynamoDB table. I created a new command called “SHOW-PICTURE”, something my Alexa interface will have to deal with, and I put the content of the slot into a new attribute, also called “Content”.
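
In text form, the handler boils down to something like this. Take it as a sketch: it follows the intentHandlers style of the old Alexa samples, the spoken responses are placeholders, and the “ID” key attribute is an assumption; use whatever primary key your CommandQueue table actually defines.

"ShowPictureIntent": function (intent, session, response) {
    // Check that Alexa actually filled the Content slot.
    if (!intent.slots || !intent.slots.Content || !intent.slots.Content.value) {
        response.tell("Sorry, I didn't understand that.");
        return;
    }
    var content = intent.slots.Content.value;
    var params = {
        TableName: "CommandQueue",
        Item: {
            "ID": { S: String(Date.now()) }, // hypothetical key attribute
            "Command": { S: "SHOW-PICTURE" },
            "Content": { S: content }
        }
    };
    dynamodb.putItem(params, function (err, data) {
        if (err) {
            response.tellWithCard("Something went wrong.", "Smart Mirror", "Error");
        } else {
            response.tellWithCard("Showing a picture of " + content, "Smart Mirror", content);
        }
    });
}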

In order to test this, I have to change the test event configuration.  You can do that in the top menu.

[Screenshot: the test event configuration option in the top menu]

Here you can define some JSON, configure the correct intent, and pass in a slot.

[Screenshot: the JSON test event]
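
Transcribed, and trimmed to the interesting part (the real event also carries version and session fields), the test event looks something like this:

{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "ShowPictureIntent",
      "slots": {
        "Content": {
          "name": "Content",
          "value": "George Michael"
        }
      }
    }
  }
}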

Here I’m searching for a picture of George Michael. Here’s the result of the test:

[Screenshot: the result of the test]

And here’s how the command shows up in the DynamoDB table:

[Screenshot: the SHOW-PICTURE command in the DynamoDB table]

Now for the skill setup. We adjust the intent schema like so:

[Screenshot: the adjusted intent schema]

We define a new intent with the same name as the one we’re handling in the function, and we define a slot with the name “Content”, just as the function expects. We want to tell the skill to expect an arbitrary string, which can be done by assigning the type “AMAZON.LITERAL”. For more info on the different slot types, see this documentation.
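
The schema itself is short; it should come down to something like this, alongside the HideAllModulesIntent that was already there from the earlier skill post:

{
  "intents": [
    {
      "intent": "HideAllModulesIntent"
    },
    {
      "intent": "ShowPictureIntent",
      "slots": [
        {
          "name": "Content",
          "type": "AMAZON.LITERAL"
        }
      ]
    }
  ]
}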

The only thing left to configure is the utterances. Like so:

[Screenshot: the sample utterances]
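
In text form, the utterances look roughly like this (the exact sample words between the braces are my own choice):

ShowPictureIntent show me a picture of {george michael|Content}
ShowPictureIntent show a picture of {george michael|Content}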

Notice how I created two different utterances, and the syntax for referencing the slot. Because it’s a slot of type literal, we need to give Alexa an example of what we’re expecting, and what comes before the pipe symbol is crucial here. We want it to expect a first name and a last name. Otherwise, if you just type “name”, for example, and then ask for “George Michael”, it will only pass on “Michael”, just the last word. Let’s test it:

[Screenshot: testing the ShowPictureIntent]

I added some more options to be complete:

[Screenshot: the extra utterance options]

Creating the “hide all modules” Alexa skill

Creating our own custom smart mirror skill based upon the hello world example.

Now it’s time to create a new Alexa skill, dedicated to our smart mirror. We will be expanding the functionality of the skill gradually, but let’s start with the most basic functionality: hiding the modules on the mirror.

You might remember from my last post that you need the US-East region in AWS to be able to make lambda functions. However, if you remember the post where I created the first table in the DynamoDB setup, it was created in a different region. You’ll need to complete those steps again in the US-East region in order for the lambda function and the DynamoDB service to communicate. Check out this post for more info. Note that you’ll have to change your aws.config file as well in order to connect to it using the debug hub.

Anyway, let’s create our function.

The first thing I did was copy the “Hello world” example on my disk to a new folder. I opened it in Atom and replaced the “Hello world” references with SmartMirror references, like this:

[Screenshot: the renamed SmartMirror sources in Atom]

That’s a good starting point. That way I was able to zip the contents and upload them to the AWS console. The rest of the editing can be done inside the AWS console.

When configuring the function you can choose the settings below. I created a new role from a template and chose the basic lambda execution role. Once saved, the config looks like this:

[Screenshot: the lambda function configuration]

From experience I know that using this role will cause the lambda function to give an error when accessing the DynamoDB table. The reason is a lack of permissions. This is easy to solve: you have to open the IAM module, the one you used in an earlier post to create the SmartMirror user.

Locate the role by navigating to the roles in the left-hand pane, and in the Permissions tab, click “Attach Policy”. Locate the “AmazonDynamoDBFullAccess” policy. In fact, this is a bit overkill: you could set up a policy that only grants access to the CommandQueue table, but this project is only intended for personal use, so I’m not worried about security right now.

[Screenshot: attaching the AmazonDynamoDBFullAccess policy in IAM]
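
For completeness: if you did want to scope it down, a minimal policy would look something like this (the account id is a placeholder, and PutItem is the only action the function currently needs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/CommandQueue"
    }
  ]
}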

Now that we’ve got the permissions thing out of the way, we can start looking at the code.

Go to the code of the function, and on top enter the following code:

[Screenshot: including the AWS SDK and instantiating the DynamoDB interface]

That way you’ll include the AWS SDK, and instantiate a new DynamoDB interface.
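In text, that’s roughly just these two lines (the variable name dynamodb is my choice):

var AWS = require("aws-sdk");      // include the AWS SDK
var dynamodb = new AWS.DynamoDB(); // instantiate the DynamoDB interface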

In the intentHandlers function, create a new entry for a “HideAllModulesIntent”, which we’ll be defining later on in the Alexa skill. The code for it is displayed below:

[Screenshot: the HideAllModulesIntent handler code]

Small note: the code doesn’t contain the definition of the “Message” variable; just add it above these lines of code.

Notice how it uses the “putItem” method. It takes a parameter (in the form of a JSON object) and a callback function. The parameter contains:

  • The table name
  • The content of the item

Notice the special syntax: every item attribute is a small JSON object of its own as well. The callback function is fairly simple. If the “err” object is not undefined, it means there’s been an error, and we need to act accordingly. If not, it succeeded. Note that we call the “response.tellWithCard” function in both branches inside the callback function. You might think that’s not very optimal and be tempted to put a single call after the “putItem” call instead. Well, been there, done that; it doesn’t work. It turns out the lambda function returns as soon as “tellWithCard” is called, and calling it right after “putItem” doesn’t give “putItem” enough time to complete its task. That means no error will be returned, but the callback function won’t be called either. The solution is the code above.
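
Reconstructed in text, and with the same caveats as before (the command string and the “ID” key attribute are assumptions), the handler looks something like this:

"HideAllModulesIntent": function (intent, session, response) {
    var message = "Hiding all modules."; // the “Message” variable mentioned above
    var params = {
        TableName: "CommandQueue",
        Item: {
            "ID": { S: String(Date.now()) },  // hypothetical key attribute
            "Command": { S: "HIDE-ALL-MODULES" }
        }
    };
    dynamodb.putItem(params, function (err, data) {
        if (err) {
            // Respond in the error branch of the callback...
            response.tellWithCard("Something went wrong.", "Smart Mirror", "Error");
        } else {
            // ...and in the success branch, never after the putItem call itself.
            response.tellWithCard(message, "Smart Mirror", message);
        }
    });
}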

Save the function.  You can test it with the button above.

[Screenshot: the test button]

When doing so, you can see the result:

[Screenshot: the result of the test]

And the log:

[Screenshot: the log output]

Let’s log on to the DynamoDB console and refresh the contents of the table:

[Screenshot: the new command in the DynamoDB table]

That worked like a charm.

Now let’s create the Alexa skill. Go to the Alexa Skills overview and create a new one, using the empty template and the settings below.

[Screenshot: the new skill’s settings]

I changed the invocation name of the “hello world” sample to “hello world”, so I can use “smart mirror” for this one. If I were to choose the same one, Alexa wouldn’t know which skill to pick.

The interaction model is shown below and resembles the “hello world” example, only customised for the smart mirror.

[Screenshot: the interaction model]
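
In essence it comes down to something like this. The intent schema:

{
  "intents": [
    { "intent": "HideAllModulesIntent" }
  ]
}

And a couple of sample utterances (the exact phrasing here is a guess):

HideAllModulesIntent hide all modules
HideAllModulesIntent hide everything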

Configure the endpoint with the ARN of the lambda function. I’ve got no screenshot of that, but it’s quite straightforward.

You can easily test it:

[Screenshot: testing the skill]

And there we have it.  Remember to change the APP_ID in the lambda function code in order to let them communicate.

If you change the Session ID parameter in the code of the debug hub, you can also see the output in the debug hub:

Schermafbeelding 2017-02-04 om 11.08.55.png

Good luck!

Creating the MMM-alexa-interface module

Creating my own module to act as a proxy for incoming Alexa feedback.

So, for my next step I’ll be creating my own module, acting as the proxy for the Alexa feedback.  All it’s supposed to do is capture the commands and the content sent by Amazon, and act accordingly… based on some configuration of course.

So the first thing to do is create an extra folder, and create a .js file and a node helper.  You can find all the info about that in the Magic Mirror Builders section on creating your own module here.

The result in Atom looks like this:

[Screenshot: the module folder structure in Atom]

So, below are the contents of my js file. I’ve implemented a very simple start function, just updating the DOM every minute.

[Screenshot: the MMM-alexa-interface js file]
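
As a minimal sketch, assuming only the behaviour described above, that part comes down to:

Module.register("MMM-alexa-interface", {
    start: function () {
        var self = this;
        // Refresh the DOM every minute.
        setInterval(function () {
            self.updateDom();
        }, 60 * 1000);
    }
});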

 

What’s more interesting is the socketNotificationReceived override. This function receives all communications from the node helper of the module. The node helper is the file where you can specify your server-side code; you can communicate with the javascript file handling the front-end story using sockets. All I do in this function is handle all of the commands I’ve specified up until now. You can see how “hide-all-modules” and “show-default-modules” handle the showing and hiding of the modules using the MM variable. Furthermore, I’ve implemented the “alexa-activated” and “alexa-deactivated” commands by sending a notification to the alert module I’ve installed on the mirror. All it’s supposed to do is show an alert of type “notification” with a matching title and message.

[Screenshot: the socketNotificationReceived override]
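
Roughly, the override looks like this. The alert texts are placeholders, and the “show-default-modules” branch is simplified (in reality only the default set comes back); SHOW_ALERT is the notification the stock alert module listens for:

socketNotificationReceived: function (notification, payload) {
    if (notification === "hide-all-modules") {
        MM.getModules().enumerate(function (module) {
            module.hide();
        });
    } else if (notification === "show-default-modules") {
        MM.getModules().enumerate(function (module) {
            module.show(); // simplified; should only show the default set
        });
    } else if (notification === "alexa-activated") {
        this.sendNotification("SHOW_ALERT", {
            type: "notification",
            title: "Alexa",
            message: "Listening..."
        });
    } else if (notification === "alexa-deactivated") {
        this.sendNotification("SHOW_ALERT", {
            type: "notification",
            title: "Alexa",
            message: "Done listening."
        });
    }
}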

The node helper code is fairly simple. The start function override just sets up a route with the Express framework, just like in our tests before. All it does is send a socket notification to the front end, where it is handled.

[Screenshot: the node helper code]
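
A sketch of the node helper; the route path is made up, and the important part is the combination of this.expressApp (the shared Express app MagicMirror exposes to helpers) and sendSocketNotification:

var NodeHelper = require("node_helper");

module.exports = NodeHelper.create({
    start: function () {
        var self = this;
        // Register a route on the mirror's Express app.
        this.expressApp.get("/alexa/:command", function (req, res) {
            // Relay the command to the front-end js file over the socket.
            self.sendSocketNotification(req.params.command, null);
            res.send("OK");
        });
    }
});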

All we need to do now is add it to the configuration of the mirror, like in the screenshot below. I’ve got no specific settings up until now, so it’s also fairly simple.

[Screenshot: the module entry in the mirror configuration]
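
The entry in config.js is just something like this (the position is arbitrary, since the module doesn’t render anything yet):

{
    module: "MMM-alexa-interface",
    position: "bottom_bar"
},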

I started up the debug hub and pressed the buttons. Here’s the result, exactly what I was hoping for.

[Animation: the mirror reacting to the debug hub buttons]

Good luck!

Create a connection from the Java Client

Setting up a quick test for the connection between the Java client and the debug hub.

It’s time for the next phase in the project. So far we’ve primarily focused on the “output” side of the story: displaying stuff on the mirror. Now we’re going to focus on the “input”: accepting voice commands from Alexa Voice Services.

In this article I’ll be attempting to send a message from the Java client to the debug hub.

I’ll be very practical on this one, nothing too fancy.  I’ll be using one of the buttons on the window of the client to start a request.

I added all of this code to the AVSApp.java file.

First, import some packages I’ll be using in the new code.

[Screenshot: the new imports]

After some browsing, I found out that the “createMusicButton” method is responsible for adding the music buttons to the interface. I’ll be using one of those buttons to simulate a request. Below, you can see I commented out the default handler and added my own call to the testMethod() method.

[Screenshot: the modified createMusicButton handler]

This method is fairly simple. It just calls another method to do all the dirty work. I added this level of indirection because ultimately I want to separate out the logic for the mirror interface. There’s just some basic error handling involved here.

[Screenshot: the testMethod() method]

Below, you can find the actual method responsible for sending the command.

[Screenshot: the method sending the command to the debug hub]

In the above code extract you can see a new URL object is created, pointing to the static IP address of my Mac where the debug server is running; port 3000 is the port it is listening on. I added the path of the routing, and specified the “GET” method. The rest of the code is basically some plumbing to read the response from the debug hub. In practice, hardly anything will happen with this response, because the Java client has no visual feedback, apart from logging to the console.
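
Reconstructed, the method looks something like this; the IP address and the route are placeholders for my actual setup, and the method name is mine:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

private void sendMirrorCommand() throws Exception {
    // 192.168.1.10 stands in for the Mac's static IP;
    // /hide-all-modules stands in for the debug hub route.
    URL url = new URL("http://192.168.1.10:3000/hide-all-modules");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");

    // Plumbing: read the response and log it to the console.
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(connection.getInputStream()));
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
    reader.close();
}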

Of course, the above code is very rudimentary. I will have to refactor it into a more robust package in which I can call the get method with multiple parameters in multiple circumstances, and where the connection settings, for example, are read from the config file. But all in due time.

Below, you can see what happens when I click the button.

[Animation: clicking the button and the request reaching the debug hub]

Good luck!

Raspberry Pi sound output

How a Raspberry Pi is configured to switch between 3.5mm jack sound output and HDMI sound output.

So, up until now, I was using headphones to listen to Alexa’s responses to my questions. It was time to experiment with sound output. Check out my Alexa Voice Services setup procedure here.

I figured the easiest way to get sound is through the HDMI interface to the screen. The screen I’m using contains speakers, so no extra external speakers would be necessary.

There are two things you need to do to force the Raspberry Pi to relay the sound output to the HDMI interface.

Log on to the Pi using ssh and type

sudo raspi-config

Navigate to the advanced options, and choose “Audio”.

[Screenshot: the raspi-config Audio option]

Select the third option: “Force HDMI”.

[Screenshot: the “Force HDMI” option]

Confirm, and get out of the interface.

Next, go to the root of the device by typing

cd /

navigate to the boot directory

cd boot

Edit the “config.txt” file.

sudo nano config.txt

Search for the “hdmi_drive=2” setting. It will probably be commented out, with a hash in front of it. Just remove the hash.

[Screenshot: the hdmi_drive setting in config.txt]
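
In text form, the change is simply:

Before: #hdmi_drive=2
After:  hdmi_drive=2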

Press Ctrl-X, then Y, then Enter to save and get out of the editor.

Reboot the Pi

sudo reboot

Time to do some testing. If you also have a monitor, go ahead and test Alexa. In my case there was a problem: the sound worked all right, but there was a lot of latency. So much latency that the beep Alexa plays to indicate it has started listening couldn’t be heard. The first two words (on average) of Alexa’s response were missing as well.

I decided to reverse the settings above (reinsert the hash, force the 3.5mm jack) and bought a 14 euro speaker set in the local grocery store. It’s powered over USB and connects to the 3.5mm jack. It appears to work perfectly. The problem will probably be integrating the speakers in the mirror: they are the smallest speakers I could find, but still wide enough to have a hard time fitting in. But that’s a bridge I’ll be crossing later.

Enjoy!