Designing for AI-Enabled Audio IoT: A Case for Performing at the Edge

Tuesday, 28 May 2019 @ 6:30 PM (Past Event)
{"https:\/\/d2gbxgj0zxdpzt.cloudfront.net\/710_c-2.jpg":"@boetter^:^http:\/\/www.flickr.com\/photos\/jakecaptive\/414691892\/^:^https:\/\/creativecommons.org\/licenses\/by\/2.0\/deed.en"}
Photo: @boetter
Abstract:

Once confined to cloud servers with practically infinite resources, machine learning is moving into edge devices for several reasons, including lower latency, reduced cost, energy efficiency, and enhanced privacy. The time needed to send data to the cloud for interpretation can be prohibitive in applications such as pedestrian recognition in a self-driving car. The bandwidth needed to send data to the cloud can be costly, not to mention the cost of the cloud service itself, as in speech recognition for voice commands.
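As a rough illustration of the latency and bandwidth point (every constant below is an illustrative assumption, not a figure from the talk), a back-of-envelope comparison of a cloud round trip against on-device inference might look like this:

```python
# Back-of-envelope latency comparison: cloud round trip vs. on-device inference.
# All constants are illustrative assumptions, not measured values.

# Cloud path: capture -> uplink -> server inference -> downlink
frame_bytes = 150_000          # assumed size of one compressed camera frame (bytes)
uplink_mbps = 10               # assumed cellular uplink throughput (Mbit/s)
network_rtt_s = 0.080          # assumed round-trip network latency (s)
server_infer_s = 0.010         # assumed server-side inference time (s)

upload_s = frame_bytes * 8 / (uplink_mbps * 1e6)
cloud_latency_s = network_rtt_s + upload_s + server_infer_s

# Edge path: inference on a local accelerator or DSP
edge_infer_s = 0.030           # assumed on-device inference time (s)

print(f"cloud round trip: {cloud_latency_s * 1e3:.0f} ms")
print(f"on-device:        {edge_infer_s * 1e3:.0f} ms")
# For a car moving at 20 m/s, each 100 ms of latency is ~2 m travelled before a decision.
```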

Energy use is a trade-off between sending data back and forth to a server and processing it locally. Machine learning computations are complex and could easily drain the battery of an edge device if not executed efficiently. Deciding at the edge also keeps the data on-device, which is important for user privacy, such as sensitive emails dictated by voice on a smartphone. Audio AI is a rich example of inference at the edge, and a new type of digital signal processor (DSP) specialized for audio machine learning use cases can enable better performance and new features at the edge of the network.
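To make the energy trade-off concrete, a sketch like the following compares, under loudly assumed illustrative numbers, the energy to radio one second of audio to a server against the energy to classify it locally on a specialized low-power DSP:

```python
# Back-of-envelope energy comparison: transmit audio to the cloud vs. classify locally.
# All constants are illustrative assumptions for a sketch, not datasheet values.

# One second of 16 kHz, 16-bit mono audio
bits_per_second = 16_000 * 16

# Radio path: energy per transmitted bit (assumed; varies widely with radio and link)
radio_nj_per_bit = 100                                  # nJ/bit, assumed
radio_uj = bits_per_second * radio_nj_per_bit / 1e3     # microjoules

# Local path: a small keyword/context model (assumed 2 MMAC per second of audio)
macs_per_second = 2_000_000
dsp_pj_per_mac = 5                                      # pJ/MAC on an audio DSP, assumed
local_uj = macs_per_second * dsp_pj_per_mac / 1e6       # microjoules

print(f"radio the audio out: ~{radio_uj:,.0f} uJ per second of audio")
print(f"classify on-device:  ~{local_uj:,.0f} uJ per second of audio")
```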

Once an edge device is enabled for always-on audio machine learning, it can do more than speech recognition at low power: contextual awareness such as whether the device is in a crowded restaurant or on a busy street, ambient music recognition, ultrasonic room recognition, and even recognizing whether someone nearby is shouting or laughing. These kinds of features will enable new, sophisticated use cases that could improve the edge device and benefit the user.
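A minimal sketch of what such an always-on loop might look like, assuming short audio frames and a toy rule-based "classifier" standing in for a learned model (everything here is illustrative, not the Knowles implementation):

```python
# Toy always-on audio context loop: frame the signal, extract cheap features,
# and hand them to a tiny "classifier". A real edge device would run a learned
# model on a low-power DSP instead of these hand-written rules.
import numpy as np

SAMPLE_RATE = 16_000
FRAME_LEN = 512                     # ~32 ms frames at 16 kHz

def features(frame: np.ndarray) -> tuple[float, float]:
    """Return (RMS energy, spectral centroid in Hz) for one audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return rms, centroid

def classify(rms: float, centroid: float) -> str:
    """Stand-in for a learned model: crude thresholds on two features."""
    if rms < 0.01:
        return "quiet"
    if centroid > 2_000:
        return "speech/shouting-like"
    return "crowd/ambient-like"

def microphone_frames(n_frames: int = 5):
    """Stand-in for a real microphone stream: random noise frames."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        yield rng.normal(scale=0.02, size=FRAME_LEN)

for frame in microphone_frames():
    print(classify(*features(frame)))
```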

Biography:

Jim Steele is the VP of Technology Strategy at Knowles Intelligent Audio. He has a track record of leading successful development of machine learning algorithms, software, hardware, and system engineering for mobile and IoT products. He joined Knowles through an acquisition; prior to that, he led his own motion-sensor startup to a successful acquisition as well. Jim has held senior management positions at Spansion, Polaris Wireless, and ArrayComm, working on a variety of complex systems ranging from audio solutions to location-based technology. He has held research positions in theoretical physics at the Massachusetts Institute of Technology and the Ohio State University. He is the lead author of The Android Developers Cookbook, which was designed to help application developers start working on the Android mobile operating system. He is also a noted speaker and has given many invited lectures. Jim holds a Ph.D. in theoretical physics from the State University of New York at Stony Brook.

Agenda:

6:30 pm - 7:00 pm: Registration, Food, Networking

7:00 pm - 8:00 pm: Talk

8:00 pm - 8:30 pm: Q&A and Networking

Admission Fee: Open to all to attend. (Please register in advance. If you cannot register in advance, you can still show up at the door, but seating is not guaranteed. Please allow extra time for NVIDIA security sign-in.)

IEEE CES members: free
IEEE Student members: free
IEEE members: $5 (pay at door)
Non-members: $10 (pay at door)

You do not need to be an IEEE member to attend! (If you wish to become a member of IEEE, click here.)