Categories
Artificial Intelligence, Cloud

Generative AI in Education With AI-x-plainer and Amazon PartyRock

Working as a professor at the St. Pölten University of Applied Sciences, I’ve observed the dual impact of generative AI in education. In this article, I’ll discuss how we are adapting our teaching methodologies.

The result is the AIxplainer educational tool, which you can freely test and remix through Amazon PartyRock.

Integrating AI in Education

Traditional coding instruction is becoming less effective as AI tools streamline code creation. However, constructing extensive system architectures and corresponding code predominantly remains a human task.

Thus, we need an educational approach where students engage with AI to address advanced challenges that extend beyond what is typically explained within lectures.

Categories
App Development, Artificial Intelligence, Speech Assistants

Local Debugging of Alexa Skills with Visual Studio Code

Creating an Alexa-hosted skill is a fantastic way to start developing for voice assistants. However, you will eventually face issues that you need to debug in code. Alexa offers local skill debugging through Visual Studio Code, but setting it up is a bit tricky. This guide will take you through the necessary steps.

Skill Environment

This guide focuses on a Python-based skill and uses Windows as a local dev environment. Most of it also applies to other environments.

I’ll start with a blank skill. First, create the skill in the Alexa Developer Console. The skill name I’m using in this example is “local debugging test”. The “type of experience” is “Other”, with a “Custom” model, as I’d like to start with a minimal blank skill. For “Hosting services”, choose “Alexa-hosted (Python)”. In the last step about templates, stick with “Start from Scratch”, which gives you a minimal Hello World-type voice interaction. The following screenshot summarizes the settings:

Review of the settings for the new Alexa skill that we will configure for local debugging through Visual Studio Code.
Categories
App Development, AR / VR, Cloud, Speech Assistants

How-To: Convert Neural Voice Audio from Amazon Polly (mp3) to Spark AR (m4a)

Currently, Facebook’s Spark AR Studio is restrictive with supported audio formats. Unfortunately, only M4A with specific settings is allowed. This short tutorial shows how to convert artificially generated neural voices (in this case coming from an mp3 file as produced by Amazon Polly) to the m4a format accepted by Spark AR. I’m using the free Audacity tool, which integrates the open-source FFmpeg plug-in.

Spark AR has the following requirements on audio files:

  • M4A format
  • Mono
  • 44.1 kHz sample rate
  • 16-bit depth

Generating Audio using Text-to-Speech (mp3 / PCM)

Neither Amazon Polly nor the Microsoft Azure Text-to-Speech cognitive service can directly produce an m4a audio file. In its additional settings, Polly offers MP3, OGG, PCM and Speech Marks. MP3 goes up to a sample rate of 24000 Hz, while PCM is limited to 16000 Hz.
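
If you want to generate the source audio programmatically instead of through the AWS console, a minimal sketch using the AWS SDK for .NET could look like this (the voice, text and file name are placeholders, not the ones used in this tutorial):

```csharp
// Minimal sketch: request neural speech from Amazon Polly as MP3 at 24000 Hz
// (the highest MP3 sample rate Polly offers) and save it to disk.
// Assumes AWS credentials and a default region are configured in your environment.
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

class PollyMp3Demo
{
    static async Task Main()
    {
        using var polly = new AmazonPollyClient();

        var request = new SynthesizeSpeechRequest
        {
            Text = "Hello! This is a neural voice sample.",
            VoiceId = VoiceId.Joanna,          // any neural-capable voice
            Engine = Engine.Neural,
            OutputFormat = OutputFormat.Mp3,   // Polly cannot output M4A directly
            SampleRate = "24000"
        };

        SynthesizeSpeechResponse response = await polly.SynthesizeSpeechAsync(request);

        using var file = File.Create("polly-neural.mp3");
        await response.AudioStream.CopyToAsync(file);
    }
}
```

The resulting MP3 is then what you import into Audacity for the conversion to M4A.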

Categories
Android, AR / VR, Digital Healthcare

Enlightening Patients with Augmented Reality

In a recent research project, we explored possibilities for interactive storytelling, usability, and interaction methods of an Augmented Reality app for patient education. We developed an ARCore app with Unity that helps patients with strabismus to better understand the processes of examinations and eye surgeries. Afterwards, we performed a 2-phase evaluation with a total of 24 test subjects.

We published the results at the IEEE VR conference. The peer-reviewed paper is available through the open access online proceedings or on ResearchGate.

A brief overview of the main findings:

Health Literacy and Education

Low health literacy is a well-known and serious issue: 1 in 5 American adults lack the skills to fully understand the implications of processes related to their health. Audio and computer-aided instructions can be helpful; spoken instructions in particular lead to a higher rate of understanding. A smartphone app that combines multiple approaches can therefore provide great benefits.

We developed and evaluated a prototype Augmented Reality (AR) mobile application called Enlightening Patients with Augmented Reality (EPAR). The app is designed for patient education about strabismus and the corresponding eye surgery. It is intended to be used in addition to the doctors’ mandatory consultations.

Categories
Digital Healthcare, Speech Assistants

Top New Alexa Skills by Students

In the “rapid prototyping” lecture of the degree program Digital Healthcare at the St. Pölten University of Applied Sciences, students faced a unique task: after just a brief introduction to voice design and speech assistants, their assignment was to create and publish an Alexa skill or Google Assistant Action.

The topic was free to choose and left to the students’ creativity. Their creation had to pass the manual skill certification process performed by Amazon. This means they not only had to develop the skill, but also provide all the required metadata like a description and icons.

As a development tool for prototyping, we decided to use Voiceflow. It had already proven to be easy to use and extremely quick for achieving results in our Alexa for Wellbeing Online Challenge.

Top Alexa Skills by the Students

In total, 14 skills have been developed and published by 14 students. Here, I’d like to highlight a few of the skills that I found especially interesting. Most of these are available in German only.

Cat Quiz

Categories
Android, App Development, AR / VR

2D Image Tracking with AR Foundation (Part 4)

With 2D image tracking, you can create anchors based on real-world images. You need pre-defined markers; Google calls this system Augmented Images. Just point your phone at the image, and your app makes the 3D model immediately appear on top of it.

In the previous part of the tutorial, we wrote Unity scripts so that the user could place 3D models in the Augmented Reality world. A raycast from the smartphone’s screen hit a trackable in the real world, where we then anchored the object. However, this approach requires user interaction and a good user experience to guide users, especially if they’re new to AR.

Using 2D Image Tracking

You need to provide reference images, which your app’s users will then encounter in the real world. AR Foundation distinguishes these images and tracks their physical location.
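
As a rough sketch of what this looks like in a Unity script (assuming an ARTrackedImageManager with a reference image library is attached to the same GameObject, and using a placeholder content prefab), the pattern is to react to newly tracked images and parent your content to them:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ImageTrackingHandler : MonoBehaviour
{
    // Prefab to show on top of a detected reference image (placeholder).
    [SerializeField] private GameObject _contentPrefab;

    private ARTrackedImageManager _imageManager;
    private readonly Dictionary<string, GameObject> _spawned = new Dictionary<string, GameObject>();

    private void Awake() => _imageManager = GetComponent<ARTrackedImageManager>();

    private void OnEnable() => _imageManager.trackedImagesChanged += OnTrackedImagesChanged;
    private void OnDisable() => _imageManager.trackedImagesChanged -= OnTrackedImagesChanged;

    private void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            // The name comes from the reference image library entry.
            var imageName = trackedImage.referenceImage.name;

            // Parent the content to the tracked image so it follows its pose.
            _spawned[imageName] = Instantiate(_contentPrefab, trackedImage.transform);
        }
    }
}
```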

Some usage scenarios where 2D image tracking is helpful:

  • Recognition of real-world objects
  • Automatically place information on top of objects
  • Create an indoor info or navigation system
  • Often quicker & easier than plane detection

Categories
Android, App Development, AR / VR

Raycast & Anchor: Placing AR Foundation Holograms (Part 3)

In the first two parts, we set up an AR Foundation project in Unity. Next, we looked at how to handle trackables in AR. Now, we’re finally ready to place virtual objects in the real world. For this, we perform a raycast and then create an anchor at the target position. How do you perform this with AR Foundation? How do you attach an anchor to the world or to a plane?

AR Raycast Manager

If you’d like to let the user place a virtual object in relation to a physical structure in the real world, you need to perform a raycast. You “shoot” a ray from the position of the finger tap into the perceived AR world. The raycast then tells you if and where this ray intersects with a trackable like a plane or a point cloud.
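
A condensed sketch of this pattern, combining ARRaycastManager with ARAnchorManager (the prefab and component references are placeholders):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class TapToPlace : MonoBehaviour
{
    [SerializeField] private GameObject _prefab;            // object to place (placeholder)
    [SerializeField] private ARRaycastManager _raycastManager;
    [SerializeField] private ARAnchorManager _anchorManager;

    private static readonly List<ARRaycastHit> _hits = new List<ARRaycastHit>();

    private void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // "Shoot" a ray from the tap position against detected planes.
        if (_raycastManager.Raycast(Input.GetTouch(0).position, _hits, TrackableType.PlaneWithinPolygon))
        {
            // The first hit is the closest one.
            Pose hitPose = _hits[0].pose;
            ARPlane plane = _hits[0].trackable as ARPlane;

            // Attach an anchor to the plane at the hit pose and parent the content to it.
            ARAnchor anchor = _anchorManager.AttachAnchor(plane, hitPose);
            Instantiate(_prefab, anchor.transform);
        }
    }
}
```

Attaching the anchor to the hit plane keeps the placed object aligned with that plane, even when the plane’s pose is refined later on.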

A traditional raycast only considers objects present in its physics system, which isn’t the case for AR Foundation trackables. Therefore, AR Foundation comes with its own variant of raycasts. They support two modes:

Categories
Android, App Development, AR / VR

Trackables and Managers in AR Foundation (Part 2)

After setting up the initial AR Foundation project in Unity in part 1, we’re now adding the first basic augmented reality features to our project. How does AR Foundation ensure that your virtual 3D objects stay in place in the live camera view by moving them accordingly in Unity’s world space? AR Foundation uses the concept of trackables. For each AR feature you’d like to use, you will additionally add a corresponding trackable manager to your AR Session Origin.

Trackables

In general, a trackable in AR Foundation is anything that can be detected and tracked in the real world. This starts with basics like anchors, point clouds and planes. More advanced tracking also covers environment probes for realistic reflection cube maps, face tracking, and even information about other participants in a collaborative AR session.

Trackable managers available in AR Foundation.

Each type of trackable has a corresponding manager class as part of the AR Foundation package that we added to our project.
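
For example, plane detection is enabled by adding an ARPlaneManager. A minimal sketch of reacting to its events (attached to the same GameObject as the manager, logging only):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PlaneLogger : MonoBehaviour
{
    private ARPlaneManager _planeManager;

    private void Awake() => _planeManager = GetComponent<ARPlaneManager>();
    private void OnEnable() => _planeManager.planesChanged += OnPlanesChanged;
    private void OnDisable() => _planeManager.planesChanged -= OnPlanesChanged;

    private void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // The manager reports newly detected, updated and removed plane trackables.
        foreach (ARPlane plane in args.added)
            Debug.Log($"New plane detected: {plane.trackableId}, alignment: {plane.alignment}");
    }
}
```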

Categories
Android, App Development, AR / VR

AR Foundation Fundamentals with Unity (Part 1)

When developing mobile Augmented Reality apps, you usually want to target both Android and iOS phones. AR Foundation is Unity’s approach to provide a common layer, which unifies both Google’s ARCore and Apple’s ARKit. As such, it is the recommended way to build AR apps with Unity.

However, few examples and instructions are available. This article provides a thorough, step-by-step guide for getting started with AR Foundation. The full source code is available on GitHub.

AR Foundation Architecture and AR SDKs

To work with AR Foundation, you first have to understand its structure. The top layer of its modular design doesn’t hide everything else. Sometimes, the platform-dependent layers and their respective capabilities shine through, and you must consider these as well.

AR Foundation is a highly modular system. At the bottom, individual provider plug-ins contain the glue to the platform-specific native AR functionality (ARCore and ARKit). On top of that, the XR Subsystems provide the different functionalities through a platform-agnostic interface.
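
One practical consequence of this layering: you can inspect a manager’s subsystem descriptor at runtime to see which capabilities the active provider actually supports. A small sketch, assuming an ARPlaneManager exists in the scene:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class CapabilityCheck : MonoBehaviour
{
    private void Start()
    {
        var planeManager = FindObjectOfType<ARPlaneManager>();
        if (planeManager == null) return;

        // The descriptor describes what the underlying provider (ARCore / ARKit) can do.
        var descriptor = planeManager.descriptor;
        if (descriptor != null)
        {
            Debug.Log($"Horizontal planes: {descriptor.supportsHorizontalPlaneDetection}");
            Debug.Log($"Vertical planes:   {descriptor.supportsVerticalPlaneDetection}");
            Debug.Log($"Classification:    {descriptor.supportsClassification}");
        }
    }
}
```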

Categories
Speech Assistants

Quick Hack: Random Dialog Paths in Voiceflow

In dialog trees for voice assistants, you often need to introduce some randomness. If the smart speaker doesn’t always repeat the same phrases, the dialog sounds more natural. Many other use cases exist as well, e.g., you might want to ask the user a random question in a quiz.

Random Block in Voiceflow

To enable this functionality, Voiceflow includes a “Random” block. It chooses a different path each time, and the “no duplicates” option ensures that the same path isn’t taken twice in a row.

However, while this works fine in the Voiceflow testing environment, it currently has issues when using the skill live on Amazon Alexa. Additionally, you might sometimes want more control over the process – e.g., pre-set the random choices, store them in a database for advanced logging, or tease the next item when the skill ends.