Designing for Democratization: Introducing Novices to Artificial Intelligence Via Maker Kits

Paper Preprint
Victor Dibia, Maryam Ashoori, Aaron Cox, and Justin Weisz
IBM Research, 1101 Kitchawan Road, Yorktown Heights, New York 10598

Existing research highlights the myriad benefits realized when technology is sufficiently democratized and made accessible to non-technical or novice users. However, democratizing complex technologies such as artificial intelligence (AI) remains hard. In this work, we draw on theoretical underpinnings from the democratization of innovation to explore the design of maker kits that help introduce novice users to complex technologies. We report on our work designing TJBot: an open-source cardboard robot that can be programmed using pre-built AI services. We highlight the principles we adopted in this process (approachable design, simplicity, extensibility, and accessibility), insights we learned from showing the kit at workshops (66 participants), and how users interacted with the project on GitHub over a 12-month period (Nov 2016 - Nov 2017). We find that the project succeeds in attracting novice users (40% of users who forked the project were new to GitHub) and that a variety of demographics are interested in prototyping use cases such as home automation, task delegation, and teaching and learning.

Keywords: Artificial Intelligence, Maker Kits, Democratizing AI, Internet of Things

1. Introduction

Figure 1. (a) Kit cardboard (chipboard) and components, pre-assembly. (b) Fully assembled kit. (c) Example of how some kit components can be combined with AI services to create capabilities.
Figure 2. (a) Exploded view of the maker kit components (1) Left foot (2) Right foot (3) Camera (4) Bottom retainer (5) Top retainer (6) LED retainer (7) Right leg (8) Left leg (9) Raspberry Pi (10) Microphone (11) LED (12) Speaker (13) Camera braces (14) Leg brace (15) Jaw (16) Servo Motor (17) Arm (18) Head (b,c) Assembled maker kit with head removed, front and rear view.

Buoyed by recent advances in machine learning, the general field of AI is well positioned to solve problems across diverse domains and has been referred to as the most important general-purpose technology of our era (Brynjolfsson and Mcafee, 2017). Rapid declines in the error rates of perception (e.g., speech recognition, image recognition) and cognition (e.g., learning to play complex games such as Go) tasks performed by AI systems herald an era in which machines match and outperform their human counterparts (He et al., 2016; Lake et al., 2015; Mnih et al., 2015). In the most successful applications, domain experts (in medicine, chemistry, art, etc.) collaborate with AI experts or leverage AI tools in crafting AI-powered solutions. Examples of such collaborations include AI applied to medical imaging and diagnosis (Beck et al., 2011; Kobayashi et al., 1996; Vyborny and Giger, 1994), AI applied to chemical search problems (Gómez-Bombarelli et al., 2016; Ma et al., 2015; Natarajan and Behler, 2016; Sendek et al., 2017), and AI applied to autonomous vehicle design (Bertozzi et al., 2002).

While this collaborative model holds promise in further expanding the impact of AI, it is limited by two factors. The first challenge is related to scale: as a growing discipline, there are not enough AI experts available to collaborate with experts across all other domains. Secondly, users without AI expertise may perceive AI, a STEM field, to be complex (Lyons and Beilock, 2012; Chilana et al., 2015; Heilbronner, 2011; Nix et al., 2015) and unapproachable, further deterring them from applying AI to solve their domain problems. These users may be students interested in learning, professionals outside the computer science domain (e.g., sales, marketing, medicine, chemistry), or individuals within computer science who are experts in domains other than AI (e.g., database, web, and mobile software engineers). Interestingly, while many software developers rank AI as an area of interest, only a few already have the required skills. A recent large-scale survey (Stackoverflow, 2018) of 101,592 software developers from 183 countries showed that only 7.7% of developers identified as having skills relevant to AI (data science and machine learning), and respondents ranked AI tools as the third most desired. Taken together, these challenges necessitate approaches that help on-board and introduce more user groups to AI.

While existing HCI studies have examined the general problem of understanding and supporting a spectrum of novice programmers (Du Boulay et al., 1992; Chilana et al., 2015; Chilana et al., 2016; Kelleher and Pausch, 2005; Myers and Ko, 2009), most of this work explores simpler programming language specifications (Kelleher and Pausch, 2005) and visual programming paradigms (Resnick et al., 2009) in supporting users' learning goals. There is an opportunity to systematically enrich the body of HCI theory and design practice through a focus on designing toolkits that make complex technology like AI more accessible, as well as studying their impact and limitations in the wild. To address this gap, we created TJBot (for the remainder of the paper, we also refer to the assembled kit as a robot or bot) - an open-source maker kit designed to make AI approachable and enable users to easily prototype applications with an embodied agent, using pre-built AI services (e.g., speech to text, text to speech, dialog and conversation, natural language processing, tone analysis, language translation). The physical embodiment of TJBot (see Figure 1) can be built from a piece of laser-cut cardboard (see Figure 1a) or 3D printed (see Figure 3c) and contains off-the-shelf electronic components such as a Raspberry Pi, a speaker, a servo, a microphone, a camera, and an LED. As part of the kit, we released sample code that demonstrates how users can easily combine AI services with the hardware on the kit to create capabilities (see Figure 1) such as allowing the bot to speak, listen, hold multi-turn conversations, see, translate text, and respond to emotion in spoken words. These capabilities can then be integrated to build higher-level use cases (e.g., a storytelling robot for kids, an emotional companion, a sign language translator using computer vision).
Our hypothesis is that by exploiting learning and engagement benefits (Kuznetsov et al., 2011; Rode et al., 2015; Peppler and Glosson, 2013; Somanath et al., 2017) of maker kits, as well as design principles that make technology approachable, we can create tools that make AI approachable to novice users and support its application in prototyping solutions.

We acknowledge that democratization is a multifaceted concept with differing implications for different user groups. In this work, our scope of democratization refers to efforts that help make AI more accessible to novices and enable its creative application in problem solving. As opposed to enabling the creation of complex AI models (e.g., the design of novel neural network architectures), the goal is to familiarize users with AI and support its application in a set of problem domains. The target audience for the current iteration of our maker kit is individuals with some basic knowledge of computing concepts but no prior experience with AI. This group includes makers (individuals familiar with hardware prototyping and basic programming, for whom AI can enable natural interactions and automation in their projects), software developers (e.g., web developers, mobile app developers, database developers, user experience designers), and students.

To understand the impact of the kit in the wild, we adopted a multi-method strategy and report on our findings over a 12-month period (Nov 2016 - Nov 2017). We pursued an iterative design process that included brainstorming sessions, a design probe session with developers of various skill levels, and publishing sample code for our project on GitHub. We then conducted several workshops (66 participants in total) and demonstrated the kit (with informal interviews) at exhibition booths at two large conferences. This work contributes to the area of maker kits and their value in designing for the democratization of emerging technology: (i) we provide one of the first detailed accounts of a design attempt to democratize AI using maker kits, and the strategies pursued in doing so; our design guidelines show how to design maker kits that are cheap and easy to use, but highly functional. (ii) We highlight specific use cases that users were interested in exploring with the maker kit (home automation, task delegation, teaching and learning), specific usage behaviors (remixing, individual vs. group use), and the impact of a visualization interface on user interaction. (iii) We provide an analysis of user interaction with the sample code we provide to scaffold the learning process. The remainder of the paper is organized as follows: in the next section, we discuss the theoretical background for this work - democratizing innovation, maker kits, and AI. This is followed by a description of our design process, a description of the components of the kit, and a report on insights from data collected at workshops as well as user interaction data from GitHub. We conclude the paper with a discussion of our findings and how they translate into design recommendations for building similar kits, among other insights.

2. Background

AI Task | Task Description | Example Tools | Skill
Creating AI | Low-level optimization, numeric computation | Python, NumPy | High
Creating AI | AI model creation, training, and debugging | TensorFlow, Theano, PyTorch, MXNet, Caffe, CNTK | High
Creating AI | High-level AI model design | Keras, Gluon, Chainer, Watson Studio, AutoML | Medium
Applying AI | Applying prebuilt models (vision, language, etc.) | Google, IBM, Microsoft, Amazon, Clarifai | Medium
Applying AI | High-level exploration, embodied prototyping | TJBot | Low
Table 1. Tools that support the democratization of various AI tasks and an estimate of the required AI skill.

In this section, we discuss the related work that underpins this research - democratizing innovation via toolkits, maker cultures and maker kits, and AI.

2.1. Democratizing Innovation via Toolkits

As new technologies emerge, an important aspect of their long-term success is the degree to which they can be applied to the specific needs of diverse user groups. This vital but challenging component of the innovation process has been described as need-related innovation (von Hippel, 2006, p.147). To address this, research studies have emphasized the user-innovation approach (Franke et al., 2006; von Hippel, 2005; Morrison et al., 2000), in which companies partner and co-create with users. This approach yields several benefits. First, it enlists a diverse array of external partners, each of whom has a deep understanding of a given usage context, enabling the generation of a diverse set of ideas. Next, it enables firms to focus their efforts on identifying highly engaged users (Hippel, 1986) and working with them to generate and test concepts (Urban and von Hippel, 1988). User innovation is attractive as it has been shown to enable faster production and reduced costs relative to sole reliance on internal R&D efforts (von Hippel, 2006, p.148; von Hippel and Katz, 2002). Given the value of user innovation, effort has been made to understand approaches to enabling user co-creation using toolkits. von Hippel (von Hippel and Katz, 2002) has discussed the emergence of such innovation toolkits for product design, prototyping, and design testing. These toolkits are intended to enable non-specialist users to create high-quality solutions that meet their specific needs (von Hippel, 2006), thus democratizing the innovation process. To support innovation, von Hippel (von Hippel and Katz, 2002) argues that these toolkits must possess several attributes: support complete cycles of trial-and-error learning; offer a broad solution space for creativity; offer a friendly user interface; contain reusable modules that can be integrated into designs; and support the creation of designs that can be reproduced at scale.
In this work, we adopt the tenets of user-innovation and innovation toolkits and develop a kit that enables user co-creation within the area of AI. We extend this notion by designing for novice users (as opposed to expert lead users), including elements of constructionism (maker kits) to aid the learning process, with the goal of enabling creativity with a complex tool such as AI.

2.2. Maker Cultures and Maker kits

2.2.1. Making and Engagement

In recent years, the culture of making (also referred to as the maker movement or Do-It-Yourself (DIY) culture) has moved from being a niche or hobbyist practice to a professional field and emerging industry (Ames et al., 2014; Lindtner et al., 2014). It has been defined broadly as "the growing number of people who are engaged in the creative production of artifacts in their daily lives and who find physical and digital forums to share their processes and products with others" (Halverson and Sheridan, 2014). Proponents of maker culture such as Chris Anderson (Anderson, 2012) distinguish the maker movement from the work of inventors, tinkerers, and entrepreneurs of past eras by highlighting three characteristics: a cultural norm of sharing designs and collaborating online, the use of digital design tools, and the use of common design standards that facilitate sharing and fast iteration (Anderson, 2012). Research in this area has sought to understand the formation of online maker communities (Buechley and Eisenberg, 2008; Buechley and Hill, 2010; Kuznetsov and Paulos, 2010) as well as characterize the dominant activities, values, and motivations of participants. Perhaps the aspect of maker cultures and maker communities most relevant to our present study concerns the motivations and ethos observed within these communities. Participants have been described as endorsing a set of values such as open sharing, learning, creativity over profit, and engendering social capital (Kuznetsov and Paulos, 2010). Makers have also been described as participating in order to receive feedback on their projects, obtain inspiration for future projects, and form social connections with other community members (Kuznetsov and Paulos, 2010).
In general, maker culture promotes a certain ethos and cultural tropes such as "making is better than buying" (Tanenbaum et al., 2013, p.2604) and has been known to build and reinforce a collective identity that motivates participants to make for personal fulfillment and self-actualization (Somanath et al., 2017). Taken together, these motivations and behaviors suggest that maker culture encourages an overall intrinsic-motivation approach valuable in addressing known engagement problems (Heilbronner, 2011) associated with effortful learning tasks.

2.2.2. Maker kits and Learning

Maker culture, much like user innovation, has adopted the use of toolkits that support problem solving and fabrication, but with a focus on their learning benefits. Early work by Harel and Papert (Harel and Papert, 1991) introduced the theory of constructionism, which emphasizes embodied production-based experiences as the core of how people learn. Building on this, learning support tools have been designed that allow for digital construction, such as the Logo programming language (Papert, 1980) and the Scratch programming language (Resnick and Silverman, 2005), and for physical hands-on construction, such as the LEGO Mindstorms kits (Resnick et al., 1988), the LilyPad (Buechley and Eisenberg, 2008; Buechley and Hill, 2010), EduWear (Katterfeldt et al., 2009), and MakerWear (Kazemitabaar et al., 2017). Each of these tools (also referred to as maker kits) emphasizes learning through making, and they have shown promise in empowering users to create self-expressive and personally meaningful designs (Kafai et al., 2014b), improving the perception of computing (Kafai et al., 2014a), and introducing new user groups to otherwise inaccessible technologies or learning experiences (Buechley and Hill, 2010; Mellis et al., 2016). The making approach has been cited for its potential to democratize technology, improve workforces, improve engagement and participation in education (Kuznetsov et al., 2011; Rode et al., 2015; Peppler and Glosson, 2013; Somanath et al., 2017), empower consumers, and contribute to the economy (Ames et al., 2014; Sivek, 2011).

2.3. Democratizing AI

The research field of AI originates from early efforts to simulate aspects of human intelligence using machines (McCarthy et al., 1955). The domain draws on advances in machine learning algorithms which allow machines to reason, learn, recognize patterns, and understand natural language in manners similar to the human brain. In recent times, AI has shown potential in solving challenging problems across multiple domains, such as enabling self-driving cars (Bertozzi et al., 2002) and medicine (Beck et al., 2011; Kobayashi et al., 1996; Vyborny and Giger, 1994). To scale the impact of AI, research and industry stakeholders have begun to explore approaches that help democratize AI by supporting two types of tasks - efforts in creating AI and efforts in applying AI. Table 1 provides a summary of tools that make these tasks more accessible to users, along with the skill requirements for each task. Creating AI entails the use of low-level numeric computation functions which are then used to create, train, and debug AI models. These tasks are supported by low-level programming frameworks such as TensorFlow (Abadi et al., 2016), Caffe (Jia et al., 2014), Theano (Bergstra et al., 2010), PyTorch, MXNet, and CNTK, and by high-level model design frameworks such as Keras (Chollet, 2015), Gluon, Chainer, IBM Watson Studio, and Google AutoML. Typically, the task of creating AI requires a high level of skill spanning the programming, mathematics, statistics, and optimization domains. To reduce the complexity associated with applying AI to solve problems, AI models are now increasingly offered as black-box cloud-hosted services (Spohrer and Banavar, 2015). These services remove the complexity of designing, training, and testing AI models by providing ready-to-use models which can be accessed over API endpoints. Examples include AI services offered by large technology companies such as IBM, Google, Amazon, Microsoft, and Clarifai.
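To make the "applying AI" rows of Table 1 concrete, invoking a prebuilt cloud-hosted model typically amounts to sending a single HTTP request to a hosted endpoint and reading back predictions, with no model design or training involved. The Node.js sketch below assembles such a request; the endpoint URL, JSON field names, and `top_k` parameter are illustrative placeholders, not any specific vendor's API.

```javascript
// Build a request to a hypothetical cloud-hosted image classification endpoint.
// The URL and JSON field names are illustrative placeholders, not a real vendor API.
function buildClassifyRequest(imageBase64, apiKey) {
  return {
    url: "https://api.example.com/v1/vision/classify", // placeholder endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey
    },
    // The service receives the raw image; no model training or tuning is needed.
    body: JSON.stringify({ image: imageBase64, top_k: 5 })
  };
}

var req = buildClassifyRequest("aGVsbG8=", "demo-key");
console.log(req.method);                 // "POST"
console.log(JSON.parse(req.body).top_k); // 5
```

A request object like this could then be passed to any HTTP client; the point is that the entire "AI" portion of the program is a service call, which is why the skill requirement in Table 1 drops to Medium or Low.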

While of immense value, these frameworks and services still require considerable skill and may be unapproachable to non-technical users. In this work, we build on these efforts to democratize AI, with a focus on supporting individuals interested in applying AI (high-level exploration and prototyping). To this end, we provide a maker kit and a software library which integrates cloud-hosted services with hardware functions on the kit and supports rapid prototyping of use cases.

2.4. Summary

While HCI and related fields have been concerned with the emergence and impact of maker culture and construction kits, and with how they influence learning, very little has been done to explore how the same culture can be leveraged to introduce and socialize complex technology. In addition, while maker culture has generally been studied in HCI as an emergent area that can both inform and be informed by design research (Ames et al., 2014; Tanenbaum et al., 2013), little work has examined its applicability in democratizing complex technology on a large scale. Thus, we argue that the attributes of maker culture and maker kits position them as a suitable vehicle for engaging new (and novice) audiences as well as socializing complex technology. The strong cultural tropes associated with maker culture can be powerful sources of motivation, and the use of a construction kit can serve to support learning for various user groups.

3. Design and Strategy

Figure 3. Visualization interface screenshot (a) Audio transcripts, intent confidence values and responses for two conversation interactions (b) Input and response from computer vision service (c) A 3D Printed bot.

We followed an iterative design process consisting of brainstorming sessions and the development of an initial prototype.

3.1. The Kit: Brainstorming and Early Designs

We engaged in structured brainstorming sessions with four team members in which we first identified specific AI technologies (speech to text, text to speech, dialog and conversation, computer vision, sentiment analysis, and natural language processing) and explored the design of maker kits that engaged a user in prototyping tangible use cases for each. Using low-fidelity mock-ups of each of these kits and an early prototype, we proceeded to evaluate aspects of the user experience such as ease of assembly, how easy it was to set up and run our sample programs (learnability), and how easy it was to modify these programs (remixing). Our experience with this first round of brainstorming and prototyping is summarized below:
• (i) To keep the kit simple, we initially focused on plain cardboard as the construction material. We found that plain cardboard deformed under the weight of components like the Raspberry Pi and the speaker. To address this, we experimented with other materials such as corrugated cardboard sheets and finally settled on the denser chipboard material (see Figure 1).
• (ii) Our earlier prototype designs utilized alligator clips (connecting the LED and servo motor) which would occasionally come loose, prompting us to replace them with an LED and cables with male/female connectors that could easily be plugged together.
• (iii) Resource constraints limited our ability to produce multiple form factors, especially at the early stage of the project, prompting us to pursue a single-form-factor strategy and focus on varying the software component to reflect integrations with AI services.

3.2. Design Probe with Developers

Following the completion of our first prototype (Dibia et al., 2017), we created sample code that showed how to program the kit, which was then shared on GitHub, an open-source code distribution platform, along with documentation on how to obtain and assemble the kit hardware. We then showed the kit at a booth within a developer conference, where we recruited participants (n=30) and conducted informal semi-structured interviews about the appearance of the kit, its functionality, and their overall reaction to demonstrations of the kit. Each of these developers was later sent an early version of the kit, and we monitored GitHub for any issues or feedback they had. While we obtained overall positive feedback during our informal interviews - they felt the kit had an appropriate level of hardware complexity, appeared easy to assemble, and had sample code that was easy to navigate - they had challenges in adapting our sample code to new use cases. We found that users posted issues related to errors when making connections to the prebuilt AI service endpoints, challenges with managing voice interaction context (e.g., deciding when to listen or speak during an interaction), and similar problems. While solutions to these challenges are readily obvious to experienced software engineers, many of our users were either novices or were uninterested in solving technical problems unrelated to their primary use cases. To address these challenges, we created a software library (TJBotlib) whose API encapsulates capabilities that were complex (usually requiring a connection to one or more AI services), frequently used, and prone to error. This way, users could focus on creativity and write significantly less code to realize their use case ideas (see Figure 4). We also observed that it was challenging for users being shown a demonstration of the kit to follow changes in the state of the bot and understand how the different cognitive services enabled its capabilities.
For example, users would frequently ask "what did it hear?", "which of my comments is it responding to?", or "why does it give this response? It's not correct". To address this, we began designing a user-friendly dashboard interface (see Figure 3) that would visualize interactions with the bot and make the bot's activities more transparent to the user. Reactions to this interface are reported in the findings section of the paper.

3.3. The Kit: Emergent Design Principles

While the design process began with some fuzzy principles in mind, it is most accurate to say that clear principles gradually emerged as the team worked through design ideas, embodied them in mockups, and developed and received critiques of the ideas. The following principles are synthesized and offered as starting points - to be modified by the particulars of the technology and the envisioned dissemination strategy - for others pursuing similar ends. Approachable design: The goal within this design strategy is to ensure the design of the kit inspires engagement. In this sense, the kit should be "tinkerable" and offer obvious cues as to its capabilities, as well as invite a user to further discover those capabilities. To achieve this, we focused on two aspects - build material and form factor. We selected cardboard, a material that most individuals are familiar with, as the build material for the kit. Next, we chose a humanoid form factor: a square-shaped robot appearance with features such as "eyes", "a jaw", and an "arm". To an extent, these features are designed to cue users in the right direction regarding the robot's AI-powered capabilities, such as the basic ability to "see", "speak", and "listen". Simplicity (Low Floors, High Ceilings, Wide Walls): The design goal pursued here is a kit that is easy for novice users to both physically assemble and program (low floors) yet functional enough to support experts in the creation of increasingly complex and meaningful use cases (high ceilings). In this sense, the kit should support a wide range of user skill levels, from novice users (e.g., school students) to experienced developers and hardware enthusiasts. This design goal is also related to the "wide walls" approach described by Resnick et al. (Resnick and Silverman, 2005), who emphasize that construction kits (for kids, in their case) should also support a wide range of different explorations (wide walls).
The expectation is that this diversity of possibilities will allow for the construction of unique creations that surprise both the user (and the creators of the kit) and inspire the sense of infinite possibilities necessary for sustained engagement. To achieve this goal, we focused on two aspects: kit assembly and programming. We chose an assembly approach that allows users to build the kit by folding or snapping parts together without the need for any soldering, tape, or adhesive. This approach enables us to navigate the complexities of blending the low-tech qualities of the cardboard material with the high-tech requirements of electronics and cognitive services. We also selected JavaScript, one of the most widely used programming languages, as the primary language used to program the kit. Specifically, the choice of Node.js as the programming platform builds on its modular package management infrastructure, which makes it easy to find diverse software libraries that can be reused and remixed in many ways. Extensibility: The kit should allow users to easily extend its capabilities by adding both hardware and software components, allowing for the creation of sophisticated, multi-faceted designs as users gain experience. We selected the Raspberry Pi as the hardware platform of choice given its support for hardware extensions via general-purpose input/output (GPIO) ports and its Raspbian operating system, which supports multiple programming languages and runtimes. Furthermore, the kit was introduced to the public as an open-source project, and its software components are hosted on the GitHub platform to allow for collaborative contribution. Accessibility: The kit was also designed to be relatively cheap, easily disseminated, and widely available. This goal is reflected in the choice of cardboard as the build material and the use of non-proprietary hardware components that can be purchased off the shelf.
The project is also open sourced - the design files to create a laser-cut or 3D-printed version (see Figure 2, Figure 3) of the kit's embodiment, as well as the sample code and libraries needed to control its components, are made available for users to download and modify. This goal is increasingly important for DIY kits, as research has shown that in some cases, while maker culture strives to promote the democratization of technology, resource constraints (custom components, expensive parts) make it remain an activity of privilege, accessible to only a select few (Tanenbaum et al., 2013, p.2605).

4. The System

// Instantiate tjbotlib library
var tj = new TJBot(["microphone", "speaker", "led", "camera"], {}, credentials);
// Transcribe audio using the Speech-to-Text AI service
tj.listen(function(msg) {
    if (msg.toLowerCase() === "what do you see") {
        // Send a camera image to the Computer Vision AI service
        tj.see().then(function(objects) {
            var description = "Objects I see are";
            objects.forEach(function(object) {
                description = description + " " + object.class;
            });
            // Synthesize audio using the Text-to-Speech AI service, play using the speaker
            tj.speak(description);
            // Change the LED light to green
            tj.shine("green");
        });
    }
});
Figure 4. Node.js code snippet showing how to prototype a voice command interaction using the TJBot library API.

4.1. Hardware

The external body of TJBot is built from laser-cut chipboard parts and can also be 3D printed (see Figure 2, Figure 3). The main electronic component is a Raspberry Pi board - an affordable yet functional credit-card-sized computer that has become popular within the maker community for prototyping applications. The Pi is extensible and allows for the addition and control of multiple sensors and actuators via its GPIO ports. The kit also contains the following components, as labeled in Figure 2: (part 11) an addressable RGB LED that can be used to represent the state of the bot and can be programmed to reflect the entire range of RGB color codes; (part 10) a USB microphone that allows the recording of speech, which can then be transcribed and processed to understand commands; (part 16) a servo motor installed in the arm of the robot that allows for mechanical control of the robot's arm; (part 12) a speaker box with support for both a 3.5mm audio connection and Bluetooth; and (part 3) a camera (8 megapixels). Given that a primary focus of this work is designing to democratize AI services as opposed to robotics, the kit contains only a single mechanical piece.
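For readers wiring the servo (part 16) themselves, hobby servos are typically driven by a pulse-width-modulated signal in which a pulse of roughly 500-2500 microseconds maps linearly to the arm angle. The helper below sketches that mapping; the pulse range is a common hobby-servo convention and is an assumption here, so the numbers should be checked against the datasheet of the actual servo in a given kit.

```javascript
// Map an arm angle (0-180 degrees) to a servo pulse width in microseconds.
// The 500-2500 us range is a common hobby-servo convention (an assumption here);
// verify against the datasheet of the actual servo before use.
function angleToPulseWidth(angleDegrees, minUs, maxUs) {
  minUs = minUs === undefined ? 500 : minUs;
  maxUs = maxUs === undefined ? 2500 : maxUs;
  // Clamp the angle to the mechanical range of the servo.
  var clamped = Math.max(0, Math.min(180, angleDegrees));
  return Math.round(minUs + (clamped / 180) * (maxUs - minUs));
}

console.log(angleToPulseWidth(0));   // 500
console.log(angleToPulseWidth(90));  // 1500
console.log(angleToPulseWidth(180)); // 2500
```

With a GPIO library such as pigpio, the computed value could then be written to the servo pin with something like `servo.servoWrite(angleToPulseWidth(90))`.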

4.2. Software

We selected JavaScript as the main programming language for TJBot given its ease of use and wide adoption (Meyerovich and Rabkin, 2013; Stackoverflow, 2018) among developers. Specifically, we use Node.js, an open-source, cross-platform run-time environment that supports server-side JavaScript development. The Node.js architecture is expressive and functional without sacrificing performance (Lei et al., 2014; Tilkov and Vinoski, 2010) and has a vibrant, community-maintained repository of third-party libraries. Programming TJBot entails three simple steps. First, the user updates the operating system and their Node.js installation. Next, they create credentials that allow them to access and integrate the cloud-hosted services they are interested in prototyping with. They can then choose to download the sample code we provide on GitHub or write their own. We also provide a software library in Node.js (TJBotlib) which encapsulates many functions for programming the kit into concise methods. To exemplify this, the code sample shown in Figure 4 contains a program that allows the robot to respond to the question "what do you see?". Its response includes a spoken description of the scene and changing its LED light to green. The encapsulated functions within TJBotlib allow the user to condense requests to multiple AI services (Speech to Text, Text to Speech, Computer Vision) and control of the kit hardware (microphone, speaker, camera, and LED) into 13 lines of code. To further scaffold the learning experience and support a learning-by-doing approach, we developed and published three sample apps on GitHub that show how to (i) control the LED on the robot using voice commands, (ii) allow the robot to connect to a live social media stream around a keyword and change its LED color based on the recognized sentiment, and (iii) design voice-based multi-turn conversations.
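The first sample app (controlling the LED with voice commands) essentially reduces to extracting a color name from a transcribed utterance and passing it to the LED. A minimal, testable sketch of that parsing step is shown below; `parseColorCommand` is our illustrative helper, not part of the published TJBotlib API, and the commented lines show how it might plausibly be wired to the library's `listen` and `shine` methods.

```javascript
// Extract a supported color name from a transcribed voice command.
// parseColorCommand is an illustrative helper, not part of TJBotlib itself.
var SUPPORTED_COLORS = ["red", "green", "blue", "yellow", "purple", "white"];

function parseColorCommand(transcript) {
  var words = transcript.toLowerCase().split(/\s+/);
  for (var i = 0; i < words.length; i++) {
    if (SUPPORTED_COLORS.indexOf(words[i]) !== -1) {
      return words[i]; // first recognized color wins
    }
  }
  return null; // no recognized color in the utterance
}

// Wiring into the kit (sketch, assuming a configured TJBot instance `tj`):
// tj.listen(function(msg) {
//   var color = parseColorCommand(msg);
//   if (color !== null) { tj.shine(color); } // change the LED color
// });

console.log(parseColorCommand("Turn the light green please")); // "green"
console.log(parseColorCommand("Hello there"));                 // null
```

Keeping the parsing logic separate from the hardware calls in this way also makes the speech-handling portion of a use case testable without an assembled kit.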

4.3. Disseminating the Project

To ensure the project was accessible to all users, we launched it as an open source project by posting sample code and design files for the kit (laser cut and 3D print versions) on GitHub, providing detailed instructions on how to assemble the kit on the Instructables platform, and sharing links to these resources on social media (Twitter). GitHub was chosen because it is widely known, has a rich set of collaboration tools, and has an active community of developers. Instructables was selected to host the instructional material we developed for the project given that it is one of the largest online maker communities (over 29 million active users and over 100,000 projects) and adopts a step-by-step presentation of instructions. Finally, we shared a blog post about the project on Twitter and encouraged users to share any projects they were building with the TJBot using the #tjbot hashtag.

5. Study

We took two complementary approaches to obtaining feedback, evaluating the kit, and studying user interactions. First, we showed the kit at two conferences, where we surveyed attendees at a workshop and demonstrated the kit at a booth. Second, we analyzed data from the project's GitHub repository.

5.1. Workshop and Survey

We conducted workshops (five workshop sessions, N=66) at two large technology-oriented conferences, inviting participants of all skill levels interested in learning about and building AI-enabled prototypes. Participants were recruited on a first-come, first-served basis: emails were sent to conference attendees inviting them to sign up until the workshop spaces were filled. Each session lasted an hour; participants worked in groups of four and began with a pre-workshop survey. The pre-workshop survey inquired about participants' backgrounds, level of technical expertise, their interest in working with the kit's hardware (sensing) capabilities or software (cognitive services) capabilities, and the use cases they envisioned. To reduce setup time and ensure each group could write code and interact with the kit, groups were provided with a fully assembled kit preloaded with starter sample code. The session began with a 15-minute introduction to the kit describing its components, the AI services available for use with the kit, and the location of tutorials (design files, sample code, instructions, etc.). This introduction also included a demonstration of the kit's speech to text, text to speech, computer vision, and sentiment analysis capabilities. During the next 20 minutes, each group was guided through a hands-on assembly of the kit, working from a cardboard cutout and following instructions provided by a workshop leader. The leader then walked through code samples and explained how each line of code controlled a hardware component or connected to an AI service. Each group was then asked to run and modify the starter code on their assembled kit. As examples, some groups added new voice commands to trigger a change in the color of the LED on the kit, some changed the search keywords for which sentiment was analyzed, and others made the kit narrate arbitrary texts.
Finally, participants were invited to a 10-minute session in which they were encouraged to think of projects they would like to build using the maker kit, and during which they completed a post-workshop survey. The post-workshop survey asked participants to evaluate the kit in terms of its visual appeal and ease of use (programming and assembly). At the end of the workshop, participants were given a kit to take home. While this workshop format is limited in its ability to measure "learnability" due to time constraints, it does focus on understanding envisioned use and evaluations of ease of use.

Theme | Before | After | Overall
Home automation | 9 | 13 | 19
Task delegation | 28 | 18 | 40
Teaching/Learning | 19 | 10 | 25
General functions | 6 | 37 | 42
None | 30 | 18 | 41
Table 2. Themes from reported use cases for the maker kit, as a percentage of use case comments (before and after the workshop).

5.1.1. Survey Findings

The pre-workshop survey revealed some diversity among the participants: 51% identified themselves as developers, 22% as makers, and 13% as designers. While most (70%) had over three years of programming experience, only 26% had a year or more of experience working with embodied maker kits, and most (74%) had no experience with AI services. This population reflects similar trends found in a large-scale study of developers in which only 7% of over 100k developers had machine learning or AI skills (Stackoverflow, 2018). None of the participants had any experience with the TJBot kit (although about a third of them had previously heard about it online). At the end of the workshop, 90% of participants indicated they were interested in working with AI services going forward.
How do users envision their use of the kit? Participants reported that they intended to use the TJBot kit in a number of ways. Over half of them (62%) indicated that they intended to modify the kit's hardware (e.g. adding new types of sensors and actuators), 77% intended to create additional software components, and about half intended to pursue both kinds of expansion. It is interesting to note that most participants said they would use the kit with other people: 65% with friends or colleagues, 48% with children, and 18% with students (teaching). A minority (27%) indicated that they would only work alone. In the post-workshop survey, the large majority of participants provided positive feedback about the kit: 96% rated it as visually appealing, 96% felt it had a good repertoire of functions, 86% felt it was easy to program, and 78% felt it was easy to build.
What are the use cases of interest? Participants were asked to describe the use cases they envisioned for TJBot at the start and at the end of the workshop. Using an iterative coding approach, the use cases provided by participants were coded into three main themes and one "General Functions" theme, as shown in Table 2. Some participants provided several ideas that fit within multiple themes; these ideas were coded separately.
Home Automation: One of the most common use cases mentioned was related to building a robot that would monitor objects or activities, or manage smart devices already present within a home. These use cases focused on using the bot as a voice-based interface that could initiate monitoring activities as well as report on the status of managed activities.
"twitter alert each time a deer walks through my yard" - P32;
"home automation assistant (lights, nest), face recognition/welcome sensor" - P44;
"control home automation sensors to help handicap or elderly people" - P52;
"Interface to my home automation system, and my personal weather system" - P37.
Task Delegation (Anthropomorphizing): This theme covered use cases in which participants referred to TJBot as an individual entity, using words like "a friend", "assistant", or "helper". These participants often described their use cases using words that would typically describe interactions with a trusted acquaintance.
"My new best friend" - P4;
"I can picture an alexa-esque friend who can answer questions and do fun things" - P18.
Some participants described TJBot as a "personal assistant." While this suggests less intimacy than the more personal framings above, it nevertheless indicates a willingness to delegate actions or responsibilities to the bot.
"a friend that will greet guests at my door" - P13;
"house manager" - P64;
"intelligent house robot" - P54.
Teaching and Learning: Participants indicated various ways in which they would use the kit for teaching and learning. Several users mentioned they would use the kit either for self-learning, or as a tool to teach their kids (and other young individuals) about artificial intelligence and computer science in general.
"just learning machine programming, learn about Watson services" - P26;
"using it as a way to teach my kids (10 and 12) about coding with Watson" - P5;
"projects with my daughter" - P30;
"getting kids interested in technology" - P51;
"would like to use this as a tool to work with young girls learning about technology" - P60;
"challenging my kids with development" - P67.
Finally, others saw the kit as an opportunity for professional learning, a tool for communicating technical ideas to non-tech-savvy audiences (clients in some cases), and a tool for developing engaging proofs of concept (POCs).
"I am in the automotive industry. I can definitely use this for proof of concepts." - P57
General Functions: Participants also came up with many other use cases and ideas tied to specific capabilities of the kit, e.g. its camera, audio, and general input/output capabilities. Examples included using the bot as an announcer (of tennis scores, weather changes, software project build status, or security incidents, or as a baby monitor or family greeter), a vision recognition tool (recognizing foreign currencies, analyzing images to infer human activity or identity), a digital assistant for language learning, a tool for games and entertainment, a virtual pet, and a tool for prototyping interactive stories.

5.2. Demonstrating the Kit at Conference Booths

At each conference where we ran the workshops, we also presented the maker kit at an interactive booth. Visitors to the booth could interact with an assembled version of the kit, ask questions, and provide feedback. Each visit, which typically lasted between 10 and 20 minutes, allowed us to observe reactions to the kit and assess interactions with specific use cases. In the following sections, we report on two notable observations from demonstrating the kit to visitors at the booth: we summarize three themes that describe the reactions of most individuals, and then describe how a dashboard interface we created impacted the interaction between users and the kit.

5.2.1. Reactions to the Kit

We observed three distinct reactions common to most users: an affective reaction, in which users responded to the visual appearance of the bot; a functional reaction, in which users inquired about the capabilities of the bot or tried to infer them themselves; and a customization reaction, in which they sought to learn how they could adapt the bot to use cases of interest to them.
(i) Affective: A common first reaction was that users tended to anthropomorphize the bot, perceiving it as friendly and using words like "cute" and "cute little guy" to describe it.
(ii) Functional: Next, users asked about the bot's hardware, its general capabilities, and concrete examples of use cases where it could be or had been applied. We also observed interesting assumptions users made as they tried to interact with the bot. For example, several users would immediately wave at the bot (assuming it could see) or issue voice commands (assuming it could hear) even before confirming that the bot possessed these capabilities. These assumptions were likely inspired by the humanoid appearance of the bot.
(iii) Customization: Finally, users sought to understand ways in which they could extend the capabilities of the bot, and the complexity of integrating third-party hardware and software components.

5.2.2. Dashboard Interface

As we demonstrated the kit during our early pilot tests, we encountered scenarios where some users were unable to understand how the bot arrived at certain responses. To address this, we created a dashboard interface (see Figure 3) that explicitly visualizes the input and output (highlighting confidence values) of each AI service used in an interaction. While our dashboard does not provide explicit reasoning for decisions (e.g. why a scene is described as a hotel lobby as opposed to an airport lobby), by providing an overlay of inputs (e.g. the captured image or generated transcript) and outputs (e.g. image descriptions or conversation responses), it gave users a foundation for understanding and probing the error bounds of the system. In one interesting case, a participant exploring the vision capabilities of the bot first stood in front of it and asked what it saw. They then repeated this three times, placing different objects (a bottle of water, a ring, a ring with a white paper behind it) in the field of view of the robot while observing its description and the captured image via the dashboard. The user iteratively discovered that the bot's ability to describe the content of an image improved when the image had a less noisy background - hence their intuitive placement of a white sheet of paper behind the ring. At each point, having an interface that showed the robot's input (the image it saw) and output (its description of that image, see Figure 3) helped users better understand how the bot might have arrived at its answers. We also observed that when the bot was demonstrated with the dashboard, users were less likely to interrupt the bot, were more aware of its state, and were more accommodating of delays in its response (sometimes caused by network latency).
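The core of the dashboard idea - surfacing each service's input alongside its candidate outputs and confidence values - can be sketched as a small formatting function. This is an illustrative sketch, not the actual dashboard code; the `renderServiceTrace` function and its trace shape are assumptions made for the example.

```javascript
// Render one AI service's input/output trace for display, surfacing
// confidence values so users can probe the system's error bounds.
function renderServiceTrace(trace) {
  // trace: { service, input, outputs: [{ label, confidence }] }
  const lines = [trace.service, `  input: ${trace.input}`];
  for (const out of trace.outputs) {
    // Show each candidate label with its confidence as a percentage.
    lines.push(`  ${out.label} (${(out.confidence * 100).toFixed(0)}%)`);
  }
  return lines.join("\n");
}

// Hypothetical vision trace, echoing the hotel-lobby example above.
console.log(renderServiceTrace({
  service: "Visual Recognition",
  input: "camera frame",
  outputs: [
    { label: "hotel lobby", confidence: 0.62 },
    { label: "airport lobby", confidence: 0.31 },
  ],
}));
```

Displaying the runner-up label alongside the winner is what lets users see that "airport lobby" was a close alternative, rather than treating the bot's answer as an unexplained fact.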

5.3. Project Interaction on GitHub

Several collaborative features of the GitHub platform enabled us to collect data on usage patterns: issues, stars, and forks. GitHub issues allow users of a project to create entries that cover topics such as bug reports, feature requests, enhancements, or general feedback; project maintainers may also use issues to keep track of open tasks. GitHub stars allow users to bookmark projects of interest and show appreciation to their maintainers. Forks allow users to create a copy of a project that they can modify without affecting the original, and to contribute their modifications back if they choose to. During the 12-month period considered in this analysis, the project repository was forked 173 times and starred 314 times. Of those who forked the repository, 40% had owned a GitHub account for a year or less and had only a single code repository listed on their account. Users also opened a total of 47 issues, mainly requesting support with hardware or software errors they encountered (68%), suggesting contributions to the project documentation, including identifying broken links or missing content (11%), and providing general feedback on issues they encountered or fixes they had implemented (21%). A total of 6 pull requests (GitHub's mechanism for allowing external users to contribute to a code repository) were created, of which 4 were approved and merged into the main code repository.
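The novice-user heuristic used in this analysis (account owned for a year or less, at most one listed repository) can be sketched as a small function. The field names (`created_at`, `public_repos`) follow the user object of GitHub's REST API, and the sample records below are hypothetical.

```javascript
// A fork owner counts as a newcomer if their account is at most a year old
// and they list at most one public repository.
function isNovice(user, asOf = new Date()) {
  const accountAgeMs = asOf - new Date(user.created_at);
  const oneYearMs = 365 * 24 * 60 * 60 * 1000;
  return accountAgeMs <= oneYearMs && user.public_repos <= 1;
}

// Fraction of fork owners classified as novices.
function noviceShare(users, asOf) {
  const novices = users.filter((u) => isNovice(u, asOf)).length;
  return novices / users.length;
}

// Hypothetical sample of two fork owners, evaluated near the end of the
// study window (Nov 2017): one recent account, one long-standing account.
const sample = [
  { created_at: "2017-06-01T00:00:00Z", public_repos: 1 },
  { created_at: "2013-02-10T00:00:00Z", public_repos: 25 },
];
console.log(noviceShare(sample, new Date("2017-11-01T00:00:00Z"))); // 0.5
```

Applied to the 173 fork owners, this kind of classification yields the 40% novice share reported above.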

6. Discussion

6.1. Maker kits as tools for democratizing technology

TJBot addresses its goal of democratizing AI in several ways: it elicits interest from users, attracts novice users, and engages broad audiences.

6.1.1. Eliciting Interest

Results from our survey and demonstrations suggest that TJBot is successful in eliciting interest in AI. While 74% of workshop participants reported no previous experience with AI services, 90% mentioned they were likely to create AI prototypes after the workshop. We also observed a three-step reaction when we demonstrated the kit to users: a positive affective reaction to the visual appearance of the kit, exploration of its capabilities, and finally an expression of interest in extending these capabilities to meet personal use cases. In our workshop survey, participants also expressed interest in extending the hardware and software capabilities of the kit.

6.1.2. Attracting Novice Users

Attracting individuals to experiment with complex technology like AI can be challenging, especially for first-time users with limited technical skill. Interestingly, interaction data for our maker kit code hosted on GitHub suggests that a significant share of interactions comes from relatively new users. We find that 40% of users who made public forks of the code we provided had owned their GitHub accounts for a year or less and had not authored any other publicly shared projects.

6.1.3. Project Impact

While it is difficult to estimate the exact number of total users and programs designed around TJBot (the project is open source and we did not include any explicit tracking software), initial data from social media and other sources suggests the project has been widely used across a range of demographics and locations. Over the 12-month period, instructions for the project on Instructables were viewed over 91,000 times, the TJBot library used to program the kit was downloaded over 5,300 times, and the project repository on GitHub received over 350 stars. Users also posted images and videos of their prototypes on Twitter, spanning use cases such as connecting the bot to IoT home devices, interactive storytelling, and a range of voice-based conversation examples, many in agreement with the envisioned use cases elicited during our workshop survey. We also saw group use, where the maker kit was employed at tech meetups, corporate and informal trainings, hackathons, and numerous STEM education events. Interactions on Twitter spanned users from over 10 countries. We also partnered with two large maker companies in North America (Adafruit Industries and SparkFun), who now offer the TJBot maker kit for purchase. Based on these data points, we estimate that between 7,000 and 8,000 kits have been created and used across these regions during this period.

6.1.4. Approachable design principles and engagement

Within this project, we adopted the design goals of approachable design and simplicity. We implemented these goals through our selection of cardboard as the material for constructing the kit, a simple snap-in design for assembly, and the choice of a relatively simple language (JavaScript) for programming it. While further research is needed to quantify the impact of the kit, we believe the approachable design choices made while designing the kit play an important role in its ability to successfully engage users. The accessible design choices made within this work (open source code, off-the-shelf hardware) also ease its dissemination: the kit does not contain any custom modules, and all parts can be readily purchased online.

6.2. AI Maker Kit Use Cases and Behaviors

Results from our survey provide insight into the use cases of interest to users and the value of embodied maker kits.

6.2.1. Use cases

The most common use cases mentioned by participants were task delegation (40%), teaching and learning (25%), and home automation (19%). Based on the study results, we find that participants (perhaps in an indirect reference to the AI capabilities within the kit) envision use cases where the bot takes on the role of a "manager" that both performs its own sensing functions and manages other devices to orchestrate higher-level actions within a home environment. This result agrees with extant studies of DIY communities in which 35% of survey respondents indicated they worked on DIY projects for home improvement (Kuznetsov and Paulos, 2010). It also suggests that users perceive value in AI agents capable of managing tasks on their behalf, with important implications for the design of such agents. These include robust natural language interaction capabilities, integration capabilities that allow them to interface with other agents or systems, and reasoning capabilities to intelligently orchestrate high-level decisions.

6.2.2. Social Making

Respondents indicated that they planned to use the maker kit with others: with friends or colleagues (65%) and with children in an educational or family setting (48%). While further research is required to understand the drivers of this social/group use behavior, the current findings suggest that such maker kits are suitable for team training, education, and parent-child teaching use cases. They further position maker kits as tools to introduce students to AI, with the potential to address engagement challenges (Heilbronner, 2011; Watkins and Mazur, 2013) known to deter students from STEM education.

As advances in AI algorithms continue to drive the proliferation of AI, it is expected that more firms will provide black box AI services as well as professional and consumer development kits to support creativity with AI. The insights from our experience demonstrating TJBot and surveying users (use cases, social making) can help inform the design of such kits.

6.3. Challenges with the maker kit

Issues reported on GitHub suggest that while most users are able to assemble the kit and download the sample code we provide, they face the most difficulty troubleshooting hardware and software problems during installation and while prototyping solutions. These included difficulties correctly issuing command line instructions and navigating project directories, modifying software configuration to match changes in their hardware components, and connectivity issues with cloud-hosted AI services. We find that 68% of all recorded issues on the project's GitHub repository requested technical support, while only 21% provided feedback or solutions. Given that most users were not technical experts, there were only 6 external code contributions to the project on GitHub. This observation highlights the importance of allocating resources to resolving technical issues as part of any effort to democratize technology. In addition to the detailed instructions and video guides we provided, we found it important to have a dedicated team member monitor issues posted on GitHub and continuously provide support. We also implemented additional automated tests and troubleshooting steps within the TJBotlib library to address some of these issues. With respect to hardware issues (e.g. incorrect connections, broken components), the kit has some parts which require careful assembly (e.g. LEDs, servo motors), and it is not immediately evident how to avoid such problems.

6.4. Interpretable Design and AI interactions

We found that in the design of interactions that depend on multiple AI services (as in our maker kit), interfaces or visualizations that help users make meaning of decisions made by AI can improve interactions. This relates to a growing area of interest described as interpretable or explainable AI (Hohman et al., 2018; Ribeiro et al., 2016), which advocates for AI implementations that explicitly help users understand the reasons behind decisions, the reasons for discarding alternative options, and failure conditions, among others. We believe these visualizations improve the interaction in two main ways: improving error recovery and helping build trust. The feedback provided by our dashboard visualizations helped users understand when the system was failing (e.g. incorrect transcripts of a command or keyword, network delays) and informed steps to recovery (e.g. retrying the interaction or pausing). This observation is similar to results found by Kulesza et al. (2015), whose tests indicate that an explanatory debugging interface increased user understanding of a machine learning system and allowed more efficient correction of mistakes. Also, as noted by Ribeiro et al. (2016) in their study on explaining the predictions of machine learning classifiers, users are unlikely to use a system which they do not trust. In turn, users are also unlikely to trust systems they do not comprehend (Lipton, 2016). With the proliferation of integrated smart appliances infused with AI (e.g. connected homes with smart TVs, refrigerators, lighting systems, etc.), our findings suggest value in creating visualizations that support end-user interaction by offering information on the inputs and outputs of integrated AI services.

6.5. Study limitations and Future Work

We evaluated our kit using surveys and observations within a workshop setting. While this approach is common in maker kit research (Jacobs et al., 2014; Katterfeldt et al., 2009; Kazemitabaar et al., 2017; Lau et al., 2009; Meissner et al., 2017), it limits our ability to assess the effect of the workshop facilitator and workshop content on interaction behaviors. Further experimental research is needed to independently compare the maker kit approach to other approaches (for example, personal fabrication (Mellis et al., 2016)) for introducing novice users to new technology. Another limitation of this work relates to the sample of interaction data analyzed from GitHub. It is possible that not all new users who interacted with our project (built the kit and downloaded sample code) took the additional effort to create GitHub accounts or public forks, limiting our ability to assess this population.

As opposed to designing a standalone AI or robotics tool, our goal with TJBot has been to engage new audiences and enable the creative design of embodied AI applications. While our current design choice of integrating cloud-hosted AI services allows users to leverage state-of-the-art versions of these services in their prototypes, it is limited by its dependence on a persistent network connection. In addition, we have received early feedback from educators eager to integrate the kit into their curricula, indicating that further simplification is required to make the project accessible to younger users (ages 6+). This is similar to findings from Mellis et al. (2016), who call for tools that simplify the programming components associated with personal fabrication. Thus, the next stage of our work will focus on integrating lightweight AI services that run locally on the kit, automating aspects of the kit's software/hardware setup, and creating visual programming interfaces such as Scratch (Resnick et al., 2009) that allow younger and lower-skilled users to prototype with the kit.

7. Conclusion

In this work, we described how we approached the task of designing a maker kit to help democratize the emergent area of AI. We reported on the design principles we applied in building the maker kit, insights we learned from demonstrating the kit at workshops, and insights from analyzing data on how users interacted with the sample code we published over a 12-month period. We find that users are interested in exploring a variety of AI use cases including home automation, task delegation, and teaching and learning, that they view working with the kit as a social endeavor (most indicating they intend to use the kit with colleagues and students), and that 40% of users who interacted with the sample code we shared were novices. This work contributes to the area of designing for democratization and proposes maker kits as a viable approach. The design principles we present can help designers interested in onboarding new user groups, and we hope they inspire future work in democratizing emergent, complex technologies.

8. Acknowledgement

We thank Rachel Bellamy for her enthusiastic support of this project, and are grateful to Thomas Erickson and Shari Trewin for valuable feedback on this manuscript.


  • Abadi et al. (2016) Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. In OSDI’16 Proceedings of the 12th USENIX conference on Operating Systems Design and Implementation. 265–283.
  • Ames et al. (2014) MG Ames, J Bardzell, S Bardzell, and S Lindtner. 2014. Making cultures: empowerment, participation, and democracy-or not? CHI’14 Extended (2014).
  • Anderson (2012) Chris Anderson. 2012. Makers: The new industrial revolution. Vol. 1. 559–576 pages.
  • Beck et al. (2011) Andrew H. Beck, Ankur R. Sangoi, Samuel Leung, Robert J. Marinelli, Torsten O. Nielsen, Marc J. van de Vijver, Robert B. West, Matt van de Rijn, and Daphne Koller. 2011. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Science Translational Medicine 3, 108 (2011), 113–108.
  • Bergstra et al. (2010) James Bergstra, Olivier Breuleux, Frederic Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math compiler in Python. Proceedings of the Python for Scientific Computing Conference (SciPy) Scipy (2010), 1–7.
  • Bertozzi et al. (2002) M. Bertozzi, A. Broggi, M. Cellario, A. Fascioli, P. Lombardi, and M. Porta. 2002. Artificial vision in road vehicles. Proc. IEEE 90, 7 (7 2002), 1258–1271.
  • Brynjolfsson and Mcafee (2017) Erik Brynjolfsson and Andrew Mcafee. 2017. The business of artificial intelligence. Harvard Business Review (2017).
  • Buechley and Eisenberg (2008) Leah Buechley and Michael Eisenberg. 2008. The LilyPad Arduino: Toward Wearable Engineering for Everyone. IEEE Pervasive Computing 7, 2 (4 2008), 12–15.
  • Buechley and Hill (2010) Leah Buechley and Benjamin Mako Hill. 2010. LilyPad in the wild: how hardware’s long tail is supporting new engineering and design communities. In Proceedings of the 8th ACM Conference on Designing Interactive Systems - DIS ’10. ACM Press, New York, New York, USA, 199.
  • Chilana et al. (2015) Parmit K. Chilana, Celena Alcock, Shruti Dembla, Anson Ho, Ada Hurst, Brett Armstrong, and Philip J. Guo. 2015. Perceptions of non-CS majors in intro programming: The rise of the conversational programmer. In 2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 251–259.
  • Chilana et al. (2016) Parmit K. Chilana, Rishabh Singh, and Philip J. Guo. 2016. Understanding Conversational Programmers: A Perspective from the Software Industry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16. ACM Press, New York, New York, USA, 1462–1472.
  • Chollet (2015) François Chollet. 2015. Keras: Deep Learning library for Theano and TensorFlow. GitHub Repository (2015), 1–21.
  • Dibia et al. (2017) Victor C. Dibia, Maryam Ashoori, Aaron Cox, and Justin D. Weisz. 2017. TJBot: An open source DIY cardboard robot for programming cognitive systems. In Conference on Human Factors in Computing Systems - Proceedings.
  • Du Boulay et al. (1992) J B H Du Boulay, M J Patel, and C Taylor. 1992. Programming Environments for Novices. Computer Science Education Research (1992), 127–154.
  • Franke et al. (2006) Nikolaus Franke, Eric Von Hippel, and Martin Schreier. 2006. Finding commercially attractive user innovations: A test of lead-user theory. In Journal of Product Innovation Management, Vol. 23. 301–315.
  • Gómez-Bombarelli et al. (2016) Rafael Gómez-Bombarelli, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, David Duvenaud, Dougal Maclaurin, Martin A. Blood-Forsythe, Hyun Sik Chae, Markus Einzinger, Dong Gwang Ha, Tony Wu, Georgios Markopoulos, Soonok Jeon, Hosuk Kang, Hiroshi Miyazaki, Masaki Numata, Sunghan Kim, Wenliang Huang, Seong Ik Hong, Marc Baldo, Ryan P. Adams, and Alán Aspuru-Guzik. 2016. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nature Materials 15, 10 (2016), 1120–1127.
  • Halverson and Sheridan (2014) Erica Rosenfeld Halverson and Kimberly Sheridan. 2014. The Maker Movement in Education. Harvard Educational Review 84, 4 (2014), 495–504.
  • Harel and Papert (1991) Idit Harel and Seymour Papert. 1991. Constructionism. In Constructionism. xi, 518.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 1026–1034.
  • Heilbronner (2011) Nancy N. Heilbronner. 2011. Stepping onto the STEM pathway: Factors affecting talented students’ declaration of STEM majors in college. Journal for the Education of the Gifted 34, 6 (2011), 876–899.
  • Hippel (1986) Eric Von Hippel. 1986. Lead users: A source of novel product concepts. Management Science 32, 7 (1986), 791–805.
  • Hohman et al. (2018) Fred Hohman, Minsuk Kahng, Robert Pienta, and Duen Horng Chau. 2018. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers. arXiv preprint (2018).
  • Jacobs et al. (2014) Jennifer Jacobs, Mitchel Resnick, and Leah Buechley. 2014. Dresscode: Supporting Youth in Computational Design and Making. Constructionism (2014), 1–10.
  • Jia et al. (2014) Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia - MM ’14. ACM Press, New York, New York, USA, 675–678.
  • Kafai et al. (2014a) Yasmin Kafai, Deborah Fields, and Kristin Searle. 2014a. Electronic Textiles as Disruptive Designs: Supporting and Challenging Maker Activities in Schools. Harvard Educational Review 84, 4 (12 2014), 532–556.
  • Kafai et al. (2014b) Yasmin B. Kafai, Eunkyoung Lee, Kristin Searle, Deborah Fields, Eliot Kaplan, and Debora Lui. 2014b. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles. ACM Transactions on Computing Education 14, 1 (3 2014), 1–20.
  • Katterfeldt et al. (2009) Eva-sophie Katterfeldt, Nadine Dittert, and Heidi Schelhowe. 2009. EduWear : Smart Textiles as Ways of Relating Computing Technology to Everyday Life. In Proceedings of the 8th International Conference on Interaction Design and Children. 9–17.
  • Kazemitabaar et al. (2017) Majeed Kazemitabaar, Jason McPeak, Alexander Jiao, Liang He, Thomas Outing, and Jon E. Froehlich. 2017. MakerWear: A Tangible Approach to Interactive Wearable Creation for Children. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17. ACM Press, New York, New York, USA, 133–145.
  • Kelleher and Pausch (2005) Caitlin Kelleher and Randy Pausch. 2005. Lowering the Barriers to Programming : a survey of programming environments and languages for novice programmers. Comput. Surveys 37, 2 (2005), 83–137.
  • Kobayashi et al. (1996) T Kobayashi, X W Xu, H MacMahon, C E Metz, and K Doi. 1996. Effect of a computer-aided diagnosis scheme on radiologists’ performance in detection of lung nodules on radiographs. Radiology 199, 3 (1996), 843 –848.
  • Kulesza et al. (2015) Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces - IUI ’15. 126–137.
  • Kuznetsov and Paulos (2010) Stacey Kuznetsov and Eric Paulos. 2010. Rise of the expert amateur: DIY projects, communities, and cultures. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries. ACM, 295–304.
  • Kuznetsov et al. (2011) Stacey Kuznetsov, Laura C Trutoiu, Casey Kute, Iris Howley, Eric Paulos, and Dan Siewiorek. 2011. Breaking Boundaries: Strategies for Mentoring Through Textile Computing Workshops. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2957–2966.
  • Lake et al. (2015) B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science 350, 6266 (2015), 1332–1338.
  • Lau et al. (2009) Winnie W.Y. Lau, Grace Ngai, Stephen C.F. Chan, and Joey C.Y. Cheung. 2009. Learning Programming Through Fashion and Design: A Pilot Summer Course in Wearable Computing for Middle School Students. Proceedings of the 40th ACM Technical Symposium on Computer Science Education 41, 1 (2009), 504–508.
  • Lei et al. (2014) K. Lei, Y. Ma, and Z. Tan. 2014. Performance comparison and evaluation of web development technologies in PHP, Python, and Node.js. In 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE). IEEE.
  • Lindtner et al. (2014) S. Lindtner, G. D. Hertz, and P. Dourish. 2014. Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM.
  • Lipton (2016) Zachary C. Lipton. 2016. The Mythos of Model Interpretability. arXiv preprint (June 2016).
  • Lyons and Beilock (2012) Ian M. Lyons and Sian L. Beilock. 2012. When Math Hurts: Math Anxiety Predicts Pain Network Activation in Anticipation of Doing Math. PLoS ONE 7, 10 (2012).
  • Ma et al. (2015) Junshui Ma, Robert P. Sheridan, Andy Liaw, George E. Dahl, and Vladimir Svetnik. 2015. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling 55, 2 (2015), 263–274.
  • McCarthy et al. (1955) J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 11 pages.
  • Meissner et al. (2017) Janis Lena Meissner, John Vines, Janice McLaughlin, Thomas Nappey, Jekaterina Maksimova, and Peter Wright. 2017. Do-It-Yourself Empowerment as Experienced by Novice Makers with Disabilities. In Proceedings of the 2017 Conference on Designing Interactive Systems - DIS ’17. 1053–1065.
  • Mellis et al. (2016) David A. Mellis, Leah Buechley, Mitchel Resnick, and Björn Hartmann. 2016. Engaging Amateurs in the Design, Fabrication, and Assembly of Electronic Devices. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems - DIS ’16. 1270–1281.
  • Meyerovich and Rabkin (2013) Leo A. Meyerovich and Ariel S. Rabkin. 2013. Empirical analysis of programming language adoption. In Proceedings of the 2013 ACM SIGPLAN international conference on Object oriented programming systems languages & applications - OOPSLA ’13. ACM Press, New York, New York, USA, 1–18.
  • Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
  • Morrison et al. (2000) Pamela D. Morrison, John H. Roberts, and Eric von Hippel. 2000. Determinants of User Innovation and Innovation Sharing in a Local Market. Management Science 46, 12 (12 2000), 1513–1527.
  • Myers and Ko (2009) Brad Myers and Andrew Ko. 2009. The Past, Present and Future of Programming in HCI. Human Factors (2009), 1–3.
  • Natarajan and Behler (2016) Suresh Kondati Natarajan and Jörg Behler. 2016. Neural network molecular dynamics simulations of solid–liquid interfaces: water at low-index copper surfaces. Physical Chemistry Chemical Physics 18, 41 (2016), 28704–28725.
  • Nix et al. (2015) Samantha Nix, Lara Perez-Felkner, and Kirby Thomas. 2015. Perceived mathematical ability under challenge: a longitudinal perspective on sex segregation among STEM degree fields. Frontiers in psychology 6 (2015), 530.
  • Papert (1980) Seymour Papert. 1980. Mindstorms : children, computers, and powerful ideas. Basic Books.
  • Peppler and Glosson (2013) Kylie Peppler and Diane Glosson. 2013. Stitching Circuits: Learning About Circuitry Through E-textile Materials. Journal of Science Education and Technology 22, 5 (2013), 751–763.
  • Resnick et al. (2009) Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, J a Y Silver, Brian Silverman, and Yasmin Kafai. 2009. Scratch: Programming for All. Commun. ACM 52 (2009), 60–67.
  • Resnick et al. (1988) M Resnick, Stephen Ocko, and S Papert. 1988. LEGO, Logo, and design. Children’s Environments Quarterly 5 (1988), 14–18.
  • Resnick and Silverman (2005) Mitchel Resnick and Brian Silverman. 2005. Some Reflections on Designing Construction Kits for Kids. In Proceedings of the 2005 Conference on Interaction Design and Children (IDC ’05). 117–122.
  • Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16. 1135–1144.
  • Rode et al. (2015) Jennifer A. Rode, Anne Weibert, Andrea Marshall, Konstantin Aal, Thomas von Rekowski, Houda El Mimouni, and Jennifer Booker. 2015. From computational thinking to computational making. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’15. ACM Press, New York, New York, USA, 239–250.
  • Sendek et al. (2017) Austin D. Sendek, Qian Yang, Ekin D. Cubuk, Karel-Alexander N. Duerloo, Yi Cui, and Evan J. Reed. 2017. Holistic computational structure screening of more than 12 000 candidates for solid lithium-ion conductor materials. Energy & Environmental Science 10, 1 (2017), 306–320.
  • Sivek (2011) S. C. Sivek. 2011. “We Need a Showing of All Hands” Technological Utopianism in MAKE Magazine. Journal of Communication Inquiry 35, 3 (7 2011), 187–209.
  • Somanath et al. (2017) Sowmya Somanath, Lora Oehlberg, Janette Hughes, Ehud Sharlin, and Mario Costa Sousa. 2017. ’Maker’ within Constraints: Exploratory Study of Young Learners using Arduino at a High School in India. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17. ACM Press, New York, New York, USA, 96–108.
  • Spohrer and Banavar (2015) Jim Spohrer and Guruduth Banavar. 2015. Cognition as a Service: An Industry Perspective. AI Magazine 36, 4 (2015), 71–86.
  • Stackoverflow (2018) Stackoverflow. 2018. Stack Overflow Developer Survey 2018. Technical Report.
  • Tanenbaum et al. (2013) Joshua G. Tanenbaum, Amanda M. Williams, Audrey Desjardins, and Karen Tanenbaum. 2013. Democratizing technology: pleasure, utility and expressiveness in DIY and maker practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13. ACM Press, New York, New York, USA, 2603.
  • Tilkov and Vinoski (2010) S. Tilkov and S. Vinoski. 2010. Node.js: Using JavaScript to build high-performance network programs. IEEE Internet Computing (2010).
  • Urban and von Hippel (1988) Glen L. Urban and Eric von Hippel. 1988. Lead User Analyses for the Development of New Industrial Products. Management Science 34, 5 (1988), 569–582.
  • von Hippel (2005) Eric von Hippel. 2005. Democratizing innovation: The evolving phenomenon of user innovation. Journal für Betriebswirtschaft 55, 1 (3 2005).
  • von Hippel (2006) Eric von Hippel. 2006. Application: Toolkits for User Innovation and Custom Design. In Democratizing Innovation. 147–164.
  • von Hippel and Katz (2002) Eric von Hippel and Ralph Katz. 2002. Shifting Innovation to Users via Toolkits. Management Science 48, 7 (2002), 821–833.
  • Vyborny and Giger (1994) C. J. Vyborny and M. L. Giger. 1994. Computer vision and artificial intelligence in mammography. (1994), 699–708.
  • Watkins and Mazur (2013) Jessica Watkins and Eric Mazur. 2013. Retaining Students in Science, Technology, Engineering, and Mathematics (STEM) Majors. Journal of College Science Teaching 42, 5 (2013), 36–41.