In June 2017, shortly after Misty Robotics spun out of Boulder, Colorado-based startup Sphero, it announced plans to build a “mainstream” home robot with the help of hobbyists and enthusiasts around the world. With $11.5 million in venture capital from Venrock and Foundry Group in the bank, it wasted no time getting to work, revealing a development platform called Misty I at the 2018 Consumer Electronics Show. A few months later, Misty took the wraps off the second iteration of its robot, Misty II, and made 1,500 units available for preorder.

It’s been a long time coming, but following a successful Kickstarter campaign in which Misty raised just short of $1 million, the company revealed this week that it has started delivering Misty II units to 500 early backers. Along with the hardware, the startup says it will soon publicly release its JavaScript-based software development kit, which will include a Visual Studio Code extension and API explorer alongside samples, documentation, and command center and skill runner web interfaces.
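Skills written against that SDK run on the robot itself as JavaScript files paired with a JSON meta file. For a rough sense of what that looks like, here is a minimal sketch; the method names (misty.ChangeLED, misty.PlayAudio, misty.MoveHead, misty.Pause, misty.Debug) follow Misty’s published JavaScript API, but the exact parameters and the audio file name are best-effort assumptions rather than verified details of the shipping SDK.

```javascript
// Minimal "hello world" Misty II skill (sketch only; see assumptions above).
// Flash the chest LED purple and play a built-in sound.
misty.ChangeLED(148, 0, 211);        // RGB values, 0-255
misty.PlayAudio("s_Awe.wav", 80);    // audio file name assumed; volume as a percentage

// Nod to acknowledge: pitch the head down, pause, then return to center.
misty.MoveHead(-20, 0, 0, 60);       // pitch, roll, yaw, velocity
misty.Pause(1000);                   // wait one second
misty.MoveHead(0, 0, 0, 60);

misty.Debug("Hello-world skill finished.");  // writes to the skill's debug log
```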

For new customers, Misty II is now available starting at $2,399 (a 25% discount off MSRP) ahead of an official market launch later this year.

“Delivering Misty II to our crowdfunding backers is a major milestone for the company, as they will play a special role in helping us prepare Misty for her official market launch later this year,” said Misty Robotics founder and head of product Ian Bernstein. “Our backers are investors in the vision of personal robotics coming true in our lives. We are very excited to see how many developers bring Misty to life.”

Above: The Misty II robot for developers.

Image Credit: Courtesy Misty Robotics

For the uninitiated, the vaguely humanoid Misty II weighs in at 6 pounds, stands 14 inches tall, and packs electronics like a 4K Sony camera, a 4.3-inch LCD display, twin chest-mounted speakers, eight time-of-flight sensors, and three Qualcomm Fluence Pro-powered far-field mics. An Occipital sensor array attached to its “forehead” sports a 166-degree wide-angle camera and IR depth sensors, enabling simultaneous localization and mapping (Occipital’s Bridge Engine handles the spatial computing bit). And on its back sits a module compatible with development boards like the Raspberry Pi 4 and Arduino Uno.

Misty II’s head, which sits on a “neck” with three degrees of freedom (3DoF), has capacitive touch sensors for extra controls and a flashlight embedded near the right “eye.” A pair of hidden chipsets (a Qualcomm Snapdragon 820 and 410) performs the heavy computational lifting, and a swappable panel plays nicely with various camera types, laser pointers, and other third-party sensors and controls.

The sensors work in tandem to guide Misty II back to its included charging station, even in the dark. All four corners of the base have time-of-flight bump sensors so it doesn’t run into obstacles or fall off of things like coffee tables. While the arms don’t do anything on their own, they’re designed to be extensible, so developers can swap them out for things like cupholders.
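For developers, those same range sensors are exposed as event streams. The sketch below assumes Misty’s JavaScript event API (misty.RegisterEvent and misty.AddPropertyTest) and best-effort property and sensor-position names: it drives forward slowly and stops when the front-center time-of-flight sensor reports something within 20 cm.

```javascript
// Sketch: stop driving when an object is within 0.2 m of the front-center sensor.
// Property and sensor-position names are assumptions; check Misty's docs.
misty.AddPropertyTest("FrontTOF", "SensorPosition", "==", "Center", "string");
misty.AddPropertyTest("FrontTOF", "DistanceInMeters", "<=", 0.2, "double");
misty.RegisterEvent("FrontTOF", "TimeOfFlight", 100, true);

misty.Drive(20, 0);   // linear velocity 20, no turning

// By convention, the callback for an event named "FrontTOF" is _FrontTOF.
function _FrontTOF(data) {
    misty.Stop();                  // obstacle ahead: stop the treads
    misty.ChangeLED(255, 0, 0);    // flash the chest LED red
    misty.Debug("Obstacle: " + JSON.stringify(data));
}
```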

On the software side of the equation, Misty II runs Windows IoT Core and Android 8 Oreo, the latter of which handles navigation and computer vision. Misty II works with third-party services such as Amazon’s Alexa, Microsoft’s Cognitive Services, and the Google Assistant, and it lets owners create custom programs and routines, including ones that take advantage of machine learning frameworks like Google’s TensorFlow and Facebook’s Caffe2.
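One way those third-party hookups tend to work in practice is for a skill to push data to an outside service over HTTP and act on the response. The sketch below is purely illustrative: the misty.SendExternalRequest parameter order is an assumption based on Misty’s JavaScript API documentation, and the endpoint URL and callback are placeholders rather than a real integration.

```javascript
// Sketch: POST to a hypothetical cloud service and log whatever comes back.
// Parameter order for misty.SendExternalRequest is assumed; the URL is a placeholder.
misty.SendExternalRequest(
    "POST",
    "https://example.com/analyze",          // hypothetical endpoint
    null, null,                             // no auth type or token in this sketch
    JSON.stringify({ robot: "misty-ii" }),  // request body
    false, false, null,                     // don't save the response to a file
    "application/json",
    "_HandleResponse"                       // callback name (assumed convention)
);

function _HandleResponse(data) {
    misty.Debug(JSON.stringify(data));      // inspect the service's reply
}
```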

Misty says that over the past few months, developers with early access have begun to imbue Misty II with facial recognition, robust locomotion, more than 40 “eyes,” and more than 80 sounds. It adds that early customers are building skills for inventory data collection, home property inspection, environmental monitoring, spatial data collection, eldercare, autism therapy, and personal engagement.

Misty isn’t exactly rushing to market; it has a 10-year plan, and it’s taking a hands-on approach to development. While a few preprogrammed skills (like autonomous driving and voice recognition) are available on GitHub, the idea is to let developers create use cases that the founding team might not have considered.

That said, Misty II won’t be bereft of skills out of the box. Here’s what will be available (a brief sketch of tapping two of these follows the list):

  • Facial detection and recognition
  • Mobile sound localization
  • Image and graphic display
  • Audio playback
  • Sequential and one-time photo capture
  • Audio recording
  • Wake word (Misty II can be woken with the phrase “Hey, Misty”)
  • Raw sensor access
  • Programmable personality
  • Skill sharing via the Misty community forum and GitHub
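
To give a flavor of how two of those built-in capabilities are surfaced to skills, here is a brief sketch that starts facial recognition and listens for the “Hey, Misty” wake word. The registration pattern follows Misty’s JavaScript API conventions, but treat the exact event type strings and file names as assumptions to verify against the published docs.

```javascript
// Sketch: react to recognized faces and to the "Hey, Misty" wake word.
// Event type strings ("FaceRecognition", "KeyPhraseRecognized") are assumed.
misty.StartFaceRecognition();
misty.RegisterEvent("FaceRec", "FaceRecognition", 1000, true);

misty.StartKeyPhraseRecognition();
misty.RegisterEvent("HeyMisty", "KeyPhraseRecognized", 10, true);

// Callbacks follow the _<eventName> naming convention.
function _FaceRec(data) {
    misty.ChangeLED(0, 255, 0);                        // green: someone was recognized
    misty.Debug("Face event: " + JSON.stringify(data));
}

function _HeyMisty(data) {
    misty.PlayAudio("s_Acceptance.wav", 80);           // built-in sound; file name assumed
    misty.Debug("Wake word heard.");
}
```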

Features coming soon include video capture of up to 10 seconds and 3D mapping integration.