Paul J. Modderman

Alexa, Remind Me To Blog

Who wants to go to all the trouble to look at things? You have to hold those heavy eyelids open, point both eyes in the same direction, and then apply brain power to perceive what you’re seeing! No thank you, sir. I’ll take my information the old-fashioned way: by shouting to my servant, demanding an answer right away.

Until relatively recently, you designed interactions with computers visual-first: here’s a screen, look at text/pictures, click/tap/type, done. But the world of chatbots and virtual assistants is blowing up. All the big players are producing voice-enabled and text-enabled bots that answer human-style questions, and anything with an API is becoming fair game to integrate into those bots. This fits my perception of the world: machine brains surround us, and will become ubiquitous and powerful in short order.

One of the most powerful current applications of voice interfaces is asking direct questions with simple phrases. Amazon’s Echo device with the Alexa Skills Kit fits the bill perfectly. That’s why I built this demonstration Alexa skill to use with an existing OData service I have. It was surprisingly easy to build the Alexa part, which made it a no-brainer to attach it to something with a lot of power.

If you want to either really impress people with your hacking skills or really annoy coworkers while testing a voice-powered interface, read on to see how I did it.


Here’s what you need before you write a single line of code:

  • An Amazon developer account. Sign up here.
  • Enable AWS on your account
  • You can test out the work you’ve done without any extra pieces of hardware, but to get the full effect, grab any of the Echo hardware from Amazon.
  • An SAP system with a working OData service. Technically speaking, you could also use some other sort of web interface into your SAP system…but OData is kind of designed for that. So just use what’s easy. I’m using the OData service that powers a BW query application I built.
  • You can start the design of the skill and input the basic skill information.
    • For this example, use skill type “Custom Interaction Model”
    • Choose a name that is short but distinct. “SAP Business Dashboard” fits nicely for this one, but any memorable name will work.
    • The invocation name is what Alexa will listen for in the voice interface. It should be even shorter and easy to remember. “SAP dashboard” for this example – so the skill will be invoked by saying “Alexa, ask SAP dashboard…”
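Before touching Alexa at all, it’s worth confirming that your OData service answers over plain HTTP. Here’s a minimal sketch of such a check – the host, service name, entity set, and credentials are all placeholders for illustration (and it’s written in Python 3 stdlib, even though the Lambda runtime used later is Python 2.7):

```python
import base64
import json
import urllib.request

# Placeholder Gateway service URL -- substitute your own host and service.
ODATA_URL = "https://your-sap-host/sap/opu/odata/sap/ZBW_DASHBOARD_SRV"

def build_query_url(entity_set):
    """Compose the entity-set URL, asking Gateway for JSON instead of XML."""
    return "%s/%s?$format=json" % (ODATA_URL, entity_set)

def fetch_query(entity_set, user, password):
    """GET the entity set with basic auth and parse the OData v2 payload."""
    request = urllib.request.Request(build_query_url(entity_set))
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())["d"]["results"]
```

If the service responds here, the same URL and credentials will work from Lambda later.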

The next part of the wizard is the voice interaction model.

Design Alexa Skill

Alexa has several paths to handle requests that come from an Echo device. You can create a custom skill, which provides a web service that does more or less whatever you want conversationally; a smart home skill, which uses an adapter to control devices in your home; or a flash briefing skill, which lets Alexa read items from a designated RSS feed. Since I control the OData web service, we’ll use the custom skill path.

Someone who uses this Alexa skill will have to use their voice to activate it, so your first job is to define how the voice interface works. It’s actually fairly simple: you define an intent, which is like a header for an action, and then one or more slots that attach to that intent. The slots are placeholders that define a list of possible values within the context of the intent. Here’s mine: intents.json.
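The attached intents.json isn’t reproduced here, so as a sketch, a minimal intent schema in the classic Alexa Skills Kit format would look something like this (the intent name GetQueryIntent is my own placeholder, not necessarily what the original file used):

```json
{
  "intents": [
    {
      "intent": "GetQueryIntent",
      "slots": [
        {
          "name": "Query",
          "type": "LIST_OF_QUERIES"
        }
      ]
    }
  ]
}
```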

You then define what goes in the LIST_OF_QUERIES separately. For this example, I just picked two queries: a materials BEx query and a customers BEx query. “Customers” and “Materials”, respectively. We’ll add the intent schema and the slot configuration to the Alexa skill setup later.
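The custom slot type LIST_OF_QUERIES then just enumerates those values, one per line:

```
Customers
Materials
```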

To finish setting up the voice interface, you provide a list of sample “utterances” to Alexa. This is a list of typical phrases people might use to interact with your skill, and it is used to train a model so Alexa can be flexible in interpreting what users say. Here’s what I provided in the utterances file, starting with the intent name and including placeholders for the slot Query.
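Each line of the sample utterances file pairs an intent name with a phrasing, with slots in curly braces. Mine looked roughly like this (again, GetQueryIntent is an illustrative name, not necessarily the one in the original attachment):

```
GetQueryIntent what are my top {Query}
GetQueryIntent show me the {Query} query
GetQueryIntent how are my {Query} doing
```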


Create the Lambda Function

To create the web service for your skill, use AWS Lambda. It’s low-touch, free for a ridiculous number of requests, and lets you plop your code right in to spin up a web function. Simple.

  • Sign into your AWS console and start here.
  • Click “Create a function” on the start screen and you’ll get to the blueprint page.
  • Click the “blank function” template.
  • Click inside the dotted line box and choose “Alexa Skills Kit”.
  • In the next screen, provide a function name in the first field, and choose Python 2.7 as your runtime.
  • Paste the code from lambda_skill.py into the editor. You’ll need to edit it later, because I left some blanks and comment placeholders for you to fill in your own function code and authentication details.
  • Set the Handler field to [name of your function].lambda_handler. So if your function name is “getCoolStuff”, your handler is “getCoolStuff.lambda_handler”
  • Choose the “lambda_basic_execution” role.
  • Leave the advanced settings as-is, unless you want to increase the timeout value to more than 3 seconds. That depends on how long it might take for your SAP OData service to respond.
  • Click “Next”, then choose “Create function” to finish the setup.
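The lambda_skill.py attachment isn’t reproduced above, so here’s a sketch of the general shape of what goes into the editor in the paste step. The intent/slot handling matches the voice interface defined earlier; the query_odata stub is where your real OData call (with your own authentication details) belongs:

```python
def build_response(speech_text, should_end_session=True):
    """Wrap plain text in the envelope Alexa expects back from a skill."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": should_end_session,
        },
    }

def query_odata(query_name):
    """Stub for the real OData call (a GET with basic auth against Gateway).
    Replace this with your own request/parse logic and credentials."""
    return "Here are the results for %s." % query_name

def lambda_handler(event, context):
    """Entry point Lambda invokes; routes on the Alexa request type."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User said "Alexa, open SAP dashboard" with no question yet.
        return build_response("Which query would you like?",
                              should_end_session=False)
    if request["type"] == "IntentRequest":
        slots = request["intent"].get("slots", {})
        query = slots.get("Query", {}).get("value", "that query")
        return build_response(query_odata(query))
    # SessionEndedRequest and anything else: just close out politely.
    return build_response("Goodbye.")
```

The Handler field you set in the next step points at the lambda_handler function above.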

Finishing Touches

Now that your Lambda function exists, you can finish the Alexa skill setup.

  • From the Lambda functions dashboard, click on your new function.
  • The top right of the screen will have an ARN field you can copy.
  • Go back into the Alexa skills kit dashboard, click your skill, and go to the “Configuration” wizard step.
  • Choose “AWS Lambda ARN” as your “Service Endpoint Type”.
  • Click “North America” in the region section, and paste your ARN into the text box that appears.

You’ve now done enough to test your skill! I recommend using the “Test” step of the skill wizard to be able to type out a text utterance and see the response your skill makes. If you have an Echo Dot or Echo device connected to your developer account, you’ll also be able to use the device to test things out, like I did in the video above.

It’s helpful to log and view information about each request if they’re failing. A simple print() statement anywhere in your Lambda code will output debug information to CloudWatch, which you can review from the “Monitoring” tab in your Lambda function.
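As a minimal illustration, even a one-line print at the top of the handler gives you a trace of every request in the log stream (the response body here is just a stub):

```python
def lambda_handler(event, context):
    # Anything print()ed here lands in the function's CloudWatch log
    # stream, reviewable from the Monitoring tab.
    print("Alexa request type: %s" % event.get("request", {}).get("type"))
    return {"version": "1.0", "response": {"shouldEndSession": True}}
```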

Have fun!

      Former Member

      Pretty cool.
      So, Alexa reads the same OData that serves the Fiori app.
      Next step OData->IFTTT/NodeRed->Alexa


      James Grills

      Developing an Alexa skill is really a no-brainer, and the whole credit should go to Amazon. They have made it possible through Amazon Web Services by providing a platform for skill development. And using it is easy and convenient as well.

      This is one of the reasons why Alexa has a much more promising future compared to other digital assistants. It can be trained to carry out so many functions that you can’t even imagine applying them in other assistant software. This is what differentiates Amazon from all other companies.