
2018 Symposium on Artificial Intelligence in Education

Multimedia Whitepaper

Matt Zucca, Tylor Burrows, Nicolas Rutherford, Radamis Zaky
July 2019

Executive Summary

The E21 Consortium’s first annual symposium was held at the University of Ottawa on October 21-22, 2018. The topic of artificial intelligence and education was chosen, as AI technologies are set to revolutionize both educational contexts and the broader societies in which they are situated.

This report briefly describes the event, then presents an analysis of artefacts documenting the symposium in the form of videos, texts, and posters. Key themes explored in the data were:

UNDERSTANDINGS AND PERCEPTIONS

Panelists and participants held varied understandings and perceptions of both artificial intelligence specifically and education generally. Many believed that AI would impact teaching and learning in significant ways, but they expressed differing views about how AI should be integrated into educational contexts.

CHALLENGES

How best to use AI in a classroom setting was one of the challenges identified by panelists and participants. How can educators teach, and how can students learn, with AI tools? And can we teach today for the kinds of jobs we expect will emerge in the future?

APPLICATIONS

Participants saw AI as applicable to every aspect of education. Personalized learning, emotional support, and logistic support were the key recurring applications envisioned by participants.

CONCERNS

Attention to ethics and diversity emerged as a strong concern of participants. Discussions centered around the risk of AI applications internalizing biases that could unfairly discriminate against minority groups. There were also concerns that some might have limited access to AI technologies as they emerge and that this would deepen socioeconomic inequality.

BENEFITS

Utilizing AI tools in education could mean achieving better teaching and learning outcomes in the classroom as well as better preparing students for the jobs of tomorrow, which panelists and participants believed will especially value critical thinking and creative skills. However, expectations regarding the benefits of AI should be tempered: AI is not a panacea.

SOLUTIONS

Proposed solutions were mostly in the realm of diversity and inclusion. In particular, the formation of groups dedicated to minority access in the development of AI was highlighted. Promoting interdisciplinarity, raising awareness of the importance of diversity, and using datasheets to describe datasets were also put forward as potential solutions.

The report’s conclusions directly address the four questions about artificial intelligence in education that were posed at the outset of the symposium:

The extent to which an institution of higher education can influence how AI is developed and applied to education will likely be tied to opportunities for stakeholders (including students, teachers, researchers, industry, government, etc.) to work with each other and especially with those outside of their own everyday experience. Recent developments at the University of Toronto, such as the Schwartz Reisman Innovation Centre and the Vector Institute, are examples of how civic-minded members of society, governments, and private industry can work through and with the university system to develop AI systems with attention to social consequences, such as the way in which we interact with each other.

If there was any emerging directive regarding AI in education, it would be to teach students when and where to apply AI solutions. ‘Simulations’ and working together were prized over lecture-based learning and standardized testing. For example, one student suggested that “AI needs to help solve collective goods problems,” which would require our shared co-operation. It was argued that educational institutions should give students the opportunity to experiment with artificial intelligence to solve problems, and to learn when, where, why and how to use AI tools. There were also calls to radically change the education system, but the vision for what that would look like remains uncertain.

Benefits were not explicitly discussed by participants and panelists as much as concerns and challenges. E21 Consortium member Contact North has pointed out in Ten Facts About Artificial Intelligence in Teaching and Learning that AI is currently being applied to personalised learning, student advising, student assessment, accessibility for students with disabilities, learning analytics, and more.

Symposium participants saw AI as potentially applicable to every aspect of education. Personalized learning, emotional support, and logistic support were the key recurring beneficial applications envisioned by participants. The following extract from a participant poster summarizes various ways that AI can be a benefit to higher education contexts:

Collects information on student goals, assesses student knowledge, tailors content to student interests, tailors content to previous learning, interacts with all students individually, creates tailored learning activities designed to appeal to students’ interests and goals, provides positive feedback.

Attention to ethics and diversity emerged as a strong concern of participants, but also one that complicates the process as humanity has no unified view of how to assuage these concerns, let alone the question of how to accommodate them in poorly understood AI systems.

Several participants felt that developments in artificial intelligence that did not address the needs of the entire population of Earth were empty and destructive. They noted that whatever form a technology takes, there are many who are unable to use it.


I. BACKGROUND

In furtherance of the E21 Consortium’s goal of encouraging disruptive dialogue within education, the first E21 symposium, on Artificial Intelligence in 21st Century Education, was held at the University of Ottawa on October 21-22, 2018.

Advances in computer technology, such as raw processing power and the neural networks that drive machine learning, combined with newly available large data sets, have led to an explosion in ‘artificial intelligence’ (AI) applications, both at present and over the horizon. Medical image processing, automated personal assistants, and financial investment algorithms are already in use, while self-driving cars and highly customizable robotics seem just around the corner. These advances will continue to be adopted across services, industries, and economies. For educators, this poses questions about how to respond. How will artificial intelligence influence what is taught to students? How will artificial intelligence influence how students are taught?

Hoping to continue building a network around imagining education in the 21st century, the E21 Consortium invited a host of panelists, sponsors, students, experts, and other attendees to participate in their vision.

To guide the planning and direction of the symposium, including panelist selection and symposium activities, the following initial questions were posed:

  1. What configurations of power, economics and industry drive the design, development and deployment of AI? To what extent can institutions of higher education play a role in the harnessing of AI?
  2. To what extent do the dictates and demands of the push towards ever-increasing development of AI – as played out in research and private sector interests – displace the traditional skills, knowledge and expertise that our institutions of higher learning continue to train and educate their students for? What societal safeguards might we need to create and implement in light of such a potentially radical disruption?
  3. What are some of the current benefits AI has provided to institutions of higher education in administrative and disciplinary applications?
  4. How might we conceive of a more equitably-designed, developed and deployed AI in light of the rate and range of disruption some have forecast as inevitable?
PARTICIPANTS

The participants registered for the event included students from high school to the doctoral level, as well as educators, researchers, and other interested parties from the public and private sectors. Having input from a range of stakeholders and perspectives was considered crucial to the success of the symposium. Figure 1 provides a breakdown of participant numbers.

Figure 1: Breakdown of participants in the symposium

To address the multidisciplinary and complex questions posed at the planning stage, the panelists were selected from various disciplinary backgrounds including ethics, law, art, film, computer science, and management. Figure 2 displays the invited panelists, including their affiliation.

Rediet Abebe

Sofian Audry

James Barrat

Chris Dede

Stephen Downes

Nevena Francetic

Timnit Gebru

Abhishek Gupta

Ian Kerr

Matthew McKean

Alastair Summerlee

Figure 2: Invited Panelists

This year’s event was organized and sponsored by several technology companies and educational institutions, as well as the City of Ottawa. The lead organizer was the University of Ottawa. Figure 3 shows the logos of E21 Consortium members, as well as sponsoring organizations.


Figure 3: Consortium members, and symposium supporting organisations

II. SYMPOSIUM ACTIVITIES

On the evening of October 21st a reception hosted by Shopify welcomed participants and panelists. In addition to providing a networking opportunity for participants, panelists in attendance were invited to give a brief introduction to themselves and to pose a question for participants to consider in advance of the next day’s activities (see Figure 4).

Can we solve some of the inequalities in the world with our understanding of AI?

– Stephen Downes

How do we destroy undergraduate education as we know it, but keep the human element to where we learn to interact with each other?

– Alastair Summerlee

How do we educate people for a jobless future?

– Sofian Audry

What can people working with a different kind of intelligence (AI) do together as a partnership?

– Chris Dede

Technology is ubiquitous in the world today. Is this for the better or for the worse? Are we developing social relationships with cell phones and other devices? Are we growing apart as a society?

– Nevena Francetic

How do we better prepare ourselves to collaborate with AI machines, and how do we harness the new knowledge that is going to come out of these systems?

– Abhishek Gupta

What counts as creativity? What are the important things that universities and institutions should be teaching?

– Ian Kerr

Figure 4: Questions posed by panelists as part of their remarks at the Shopify reception

The symposium’s main event on the 22nd began with the president of the University of Ottawa, Jacques Frémont, welcoming the panelists and participants, followed by an introduction by the University of Ottawa’s Vice Provost for Academic Affairs, Dr. Aline Germain-Rutherford.

The symposium activities began with three interactive panel discussions, each composed of three panelists and a facilitator. These panels, which each lasted 75 minutes, were supplemented with pre-recorded video clips from remote panelists, and began with the panelists speaking to a specific topic with audience questions and comments woven in by the facilitator. Audience participation and engagement were further promoted through a Twitter hashtag (#E21sym) and the collaborative online tool Nureva Wall, both displayed in the symposium venue (see Figures 5 and 6). The panels were live streamed and recorded.

Figure 5: An example of participants’ posts to the Nureva board

The three guiding probes were as follows, although the facilitators each chose to ‘disrupt’ the question with minor changes:

PROBE 1

To what extent does the drive towards ever-increasing development of AI – as played out in realms of research and private-sector interests – displace traditional skills, knowledge and expertise? What impact does this have on current educational offerings? What societal safeguards might we need to create and implement in light of such a potentially radical disruption?

Panelists: Chris Dede, Ian Kerr, James Barrat

PROBE 2

What are some of the current and potential benefits AI can bring to education? To what extent can educational institutions play a role in the harnessing of AI?

Panelists: Alastair Summerlee, Matthew McKean, Sofian Audry

Facilitator: Nevena Francetic

PROBE 3

How might an ethically and morally-informed AI be conceived in a culturally diverse global context? How might we realize a more equitably-designed, developed and deployed AI in light of the rate and range of disruption some have forecast as inevitable?

Panelists: Abhishek Gupta, Nevena Francetic, Stephen Downes and video clips of Timnit Gebru & Rediet Abebe

Facilitator: Tylor Burrows

Figure 6: Symposium venue, with Nureva board being displayed on screens

Following the interactive panel discussions, two ‘disruptive dialogue’ sessions were held. These sessions provided the participants with an opportunity to imagine the role of AI in education by designing an educational robot on flip-chart paper.

Participants were first asked to identify an education problem and create an AI robot to solve it. They were to: Name it, Draw it, Label key features, and Explain how it fixes the problem. After this, they: Shared it with their team, Used stickers to highlight great features on any robot, and Took a picture to post on Nureva.

After the individual poster session, a team robot was created. This time, participants were asked to: Identify a wicked education problem, Co-create a Team AI Education Robot, Name and draw it, Describe its features, Pretend it was human (what would it feel, see, hear, sense?), Use stickers to identify favorite features, and Pick a team representative to present at a plenary at the end of the second disruptive dialogue session.

The final session of the day was a statement from panelist Stephen Downes which opened up, extended, and clarified some of the key ideas that emerged during the symposium.

Between each of these activities was a short break providing participants the opportunity to network as well as appear in video-interviews regarding the symposium, using the ‘vox-pop’ format.

III. MEANINGFUL PERSPECTIVES: THEMATIC FINDINGS

The symposium activities yielded a varied and generous data set including:

  • Videos of the interactive panel discussions
  • Videos of interviews with remote panelists
  • Vox pop videos of participants
  • #E21sym tweets by participants
  • Nureva Wall posts by participants
  • Posters created by participants

We collected these together and, after an initial scanning of what had come out of the symposium, established a thematic frame which could incorporate the breadth of materials collected.

Figure 7: Visual summary of the key points extracted from the data analysis
1. Understandings and Perceptions

Artificial intelligence is fast becoming an integral part of our daily lives. But not everyone is in agreement about how it should be defined. In common parlance, machine learning, automation, and other algorithmic feats are often considered as AI. Furthermore, public perception of AI shows both unrealistic fears and improbable expectations. For AI experts, though, AI can be broadly defined as any simulation of human intelligence processes (like learning, reasoning and self-correction) by machines, especially computer systems.

Participants and panelists had different and sometimes conflicting definitions of AI. For instance, panelist Chris Dede tried to get to the heart of artificial intelligence by comparing it to a human:

AI has different meaning to different people but fundamentally it is about the fact that machines have different strengths than people do. Machines have much larger short term memories than people do… Machines are tireless and not bored by repetitive activities, and you can come up with a long list of similar things. You can also list a whole bunch of things that people are better at than machines…

Link to YouTube video >

Yet conflicting views surfaced online. Throughout the presentation, participants doubted the usefulness of the comparison between human beings and machines. Some Nureva users wondered “AI as a metaphor for everything that is not human… how useful is that?” Meanwhile, a Twitter user suggested:

At #E21Sym @oldaily … we are kidding ourselves if we think humans are especially better at some things than #ai … AI is getting better all the time and can be creative and become empathetic.

Furthermore, participants were sometimes divided on what it means to learn, to know, to teach, and to educate. But most agreed that critical thinking was extremely important for today’s and tomorrow’s students. The “importance of teaching children critical thinking and theory of knowledge,” as well as the belief that “AI cannot replace education in critical thinking,” were widely affirmed and repeated. At the same time, some participants questioned whether critical thinking skills can be efficiently taught. Facilitator and panelist Alastair Summerlee was quoted on Nureva saying “don’t prepare us for jobs, train us to think and feel!” Other, more critical voices tried to pin down exactly what a creative skill was and how it relates to AI. One idea for incorporating AI in education was to “Give kids problems so they realize why, and when, they need to use AI #E21SYM.” During the first panel, Summerlee confessed his long-time ambition of destroying undergraduate studies, after which participants expressed dissatisfaction and concern about today’s educational system.

There was a small but persistent current of Nureva posters who were unsure of the effects of this new technology on humans. Some argued that “technological dependence leads to decreased creativity,” while others focused on its effect on children. They doubted that AI could support brain development, asserting that “there is plenty of research on the impact of technology on the brains of children growing up. It can affect their brains. Tech is good but [for] children?” Others were sure that they “don’t want AI to ‘decide’ what I learn.”

Regarding teaching, some participants believed that “teaching must lead to inspiration,” while others were uncertain whether teachers’ presence was needed, since face-to-face contact with teachers is possible online. Yet the low completion rate of massive open online courses was cited as an example of the failure of online education initiatives. Someone went as far as saying that “We are not training children. We are training tomorrow’s adult cyborgs.” There was a desire for “AI [to] be used to broaden schooling for those that struggle with standardized testing.” Learning styles and methods of teaching were brought up in response to the supposed failure of standardized testing to properly measure aptitude. In some cases AI was seen as the solution to aptitude testing, while in others it was seen as likely to exacerbate testing’s flaws, with one participant arguing that “AI grading might accept ‘correct’ but not diverse answers.”

Ultimately, the ambiguity of the term artificial intelligence and what it is capable of led to many conflicting questions and concerns, several of which went unaddressed.

2. Concerns

The development and deployment of AI into many sectors and fields today is fueling some concerns, notably about its effects on employment and inequality. It also invites general questions about ethics and the risk of harmful biases being programmed into AI machines. AI systems underpinned by deep learning algorithms depend on the data fed into them. When that data is selected by humans, the potential for biases to be passed on is inherent and should be monitored closely.
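
As a concrete illustration of this point, consider the minimal sketch below. It is a toy example written for this report, not a system discussed at the symposium: a trivial "model" trained on human-selected, historically skewed decisions simply reproduces that skew in its predictions.

```python
from collections import defaultdict

# Hypothetical historical decisions selected by humans: group "A"
# applicants were approved far more often than group "B" applicants,
# independent of qualification.
training_data = [
    ("A", "qualified", 1), ("A", "qualified", 1), ("A", "unqualified", 1),
    ("B", "qualified", 0), ("B", "qualified", 1), ("B", "unqualified", 0),
]

# A naive frequency-based "model": approve when the historical approval
# rate for the applicant's group exceeds 0.5.
approvals = defaultdict(list)
for group, _, label in training_data:
    approvals[group].append(label)

def predict(group):
    history = approvals[group]
    return int(sum(history) / len(history) > 0.5)

# The model encodes the historical skew: "A" is approved and "B" is
# rejected, regardless of individual qualification.
for group in ("A", "B"):
    print(group, "->", "approve" if predict(group) else "reject")
```

Real deep learning systems are far more complex than this tally, but the mechanism is the same: whatever regularities, fair or unfair, exist in the selected training data are what the system learns to repeat.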

Symposium participants were adamant that having a diverse team of people creating artificial intelligence is superior to any other arrangement. It was argued that “Questions, solutions, and impact will depend on the background and experience of the scientist(s) doing the work, so diversity is important.”

Several times during the discussion of this topic, participants, both online and off, criticized the symposium, as well as the development of artificial intelligence generally, as being biased in favor of ‘Western values.’ They suggested things like “AI was invented in North America, ergo Western perspective of the tech,” and were skeptical of the ethics that would be built into it, asking “Whose ethics would the AI be based on if people’s views on ethics differ?” and “Are we going to be able to create an AI that is unbiased?” Panelist Stephen Downes suggested that the problem of ethics in AI might be doomed from the start: “Our big problem with AI and ethics is the fear, a genuine fear, that an AI might act like we do when it comes to ethics and it might make the kinds of ethical decisions that we already make. So, I think we need to think of ethics and how we are doing that as a species before we start thinking about ethics in AI.”

To mitigate the problem of bias in AI, some posters and speakers suggested that developers from diverse backgrounds would be able to represent different interests, as well as to see things that normally go unseen. They were anxious about the representation of the marginalized, whom, it was widely felt, AI could leave behind.

For example, some worried, “Where is the dialogue referencing those with no www access?” On the other hand, some users thought that those posters were confusing access to the Internet with the problems of artificial intelligence. Still, posters seemed to support the idea that “The Internet is the gateway to the modern economy, education, information.” Meanwhile, others thought that “If the data is available to everyone,” there would be nothing to worry about. Many wondered if AI would “erase the ever increasing chasm between the have and have-nots?” Panelist Chris Dede gave an example of how this might happen, and how educators can try to stop it:

Could this device (Super GPS) be used to widen inequality in the society? Absolutely, if I’m part of the elite and I have that super GPS, think of the kind of investments that I would know how to make or think of the kind of ways that I could manipulate what is happening around me in my local environment. So, back to the elementary school, I don’t want to have students learning what the AI is going to know other than foundational knowledge because this is John Henry competing with the steam engine. I want them to know how to use [AI]… to avoid widening the gap between the have and have-nots.

– Chris Dede

Dede thought of advanced technology as a ‘sorcerer’s apprentice’ which people should learn to use, rather than to compete with; by teaching people to use this sorcerer’s apprentice, we could avoid widening the gap between the haves and the have-nots. The super GPS itself is only a stand-in: a hypothetical case in which someone with a better understanding of, and easier access to, an advanced technology could make better choices for themselves.

Others anticipated unpredictable change and possible benefits as well as drawbacks. They implied that jobs may not ‘disappear’ but change, and the idea was often repeated that AI could “create new jobs that we cannot fathom: think of web devs pre-web era.”

One of the largest challenges that participants believed we would face was an AI-created apocalypse scenario. Though panelist Ian Kerr described himself as ‘cautiously apocalyptic,’ he and many others focused on the positive potentials and more modest drawbacks of artificial intelligence. Despite these assurances, however, people continued to bring up their fears. They argued that “AI will develop because of Defense” and that “bad actors are presently working on weaponizing AI.” Geopolitically, the “western countries are against the eastern ones in developing AI. Cold war,” and “AI could be a winner-take-all race.” Many people were concerned about the international consequences of AI development for military use and asked, “If no one is attacking you why increase your defense budget?”

3. Challenges

To tackle current and upcoming challenges in AI, posters felt that “AI is not the same as other domains and needs its own solutions,” yet it is “being foisted on society with no democratic discussion.” They wondered whether “there [were] political parties tackling AI realistically?” and proposed that “a diverse team of ppl can make decisions. It’s just very expensive vs AI.” On Twitter, people asked the panelists directly about the circumstances of machine-guided decision making, as well as the lack of oversight in existing deployments of AI in the private sector:

Under what circumstances do we delegate decision making to machines? #E21sym

– @ianrkerr (Ian Kerr)

We should be skeptical of corporations developing AI. #E21sym E.g. databases can be biased. Where is the oversight?

– @jrbarrat (James Barrat)

Noticing that AI was developing so fast that it could outpace democratic discussion, one participant explained:

Artificial intelligence is an area that is developing very, very quickly, maybe too quickly. Our education system, in my opinion, is no longer adapted to the realities of students in our societies that are jumping onto AI. There is an urgent need I think to facilitate these conversations with the different actors who are important for this conversation: students, lots of students, researchers, thinkers, innovators, people who also bring ideas that are contradictory, controversial…


Panelist James Barrat drew attention to the challenges facing the connection between employment and education. He put forward the idea that jobs would be lost or changed, and suggested avenues for students in light of those expectations:

I’m not sure AI has a lot to offer those primary kids. In fact I would encourage them to turn out and walk the other way because the jobs of the future will be taken by a lot of AI and by a lot of automation. PricewaterhouseCoopers has one estimation, McKinsey has another. Everybody has a number and the number I ran into this morning was that by 2030 between 400 and 800 million jobs will be replaced by AI. So those kids shouldn’t be… they should either be hard core programmers and get into the game and be robotics engineers and go study tort law for autonomous vehicles… but I would encourage them to do the other thing and that’s to do the things that AI will never do and that’s interpersonal communication, that’s emotional intelligence, that’s creativity and curiosity.

Link to YouTube video >

Others were more uncertain: “the point is whether AI is going to remove teachers and even learners or?” Some, assuming educational institutions will continue to exist, focused on computer science. They wanted to know whether ethics classes were mandatory in computer science programs, and someone suggested that “training on ethics and society lags behind technical training.” The consensus was that the education system is outdated, and many people echoed the statement that “Most teachers that have been teaching the same curriculum for a long time — are using the same things ie. tests and it should be more diverse.” Exacerbating the problem, however, were recurring questions about what counts as knowledge, and how to identify something that works. As Stephen Downes pointed out at the end of the symposium:

What counts as data? What counts as evidence? We don’t really think about that, what would count as evidence that some solution works right? We have this presumption this solution works. What does this mean? Does that mean people memorize things better? Does it mean they like it better? Does it mean they stay out of jail? What kind of data would be evidence for what works? Not yet clear.

– Stephen Downes

Overall, people felt that education itself should be more personalized, should take students’ emotions into account, and should help students think outside of the box. It should be affordable too, or else many schools will not have access.

Ian Kerr urged educators to come up with a way to decide when, where, and why to use technologies in classrooms, lest they repeat the problem of PowerPoint:

One of the things we sometimes have to think about is exactly what underlies the technology and the decisions that led up to its adoption, to use it in a particular context. So whether it is an iPad in the classroom or whether it is PowerPoint in the lecture hall, there is a whole sort of political, economic analysis that underlies why those tools are being used. So I think it may short change the problem a little bit to simply say we just need to better distribute the technology, as if the technology is a magic pill that solves everything in that way… So, I think part of answering the question involves asking the deeper question of why is that technology there in the first place and to what extent do we need that technology? To what extent can we do other things to foster the same skill set, and then to ask questions about how we promulgate all those things in ways to provide greater access?

Link to YouTube video >

Finally, there were a few statements and posts that didn’t fit easily into one category, or spanned so many that they deserve an honorable mention. For instance, Chris Dede was especially critical of what he saw as a failure of institutions, especially education, to use the research that was already out there:

I think the biggest problem in education is not that we can’t keep up with the research but that we don’t follow anything that is in it. In the year 2002 the US National Research Council published its second most popular report ever, called “How People Learn,” and in 2005 they published “How Students Learn in Science, Mathematics and History,” pointing out that those three are completely different, and a month ago they published “How People Learn” version 2.0, which brings in much that has happened in neuroscience and much that is happening in understanding culture since “How People Learn” was published. So the principles are all there, what we need to do is all there… The problem is that we are not doing it, and AI can’t fix that but we can.

Link to YouTube video >

Others brought up long-standing and critical philosophical problems such as: the difference between right and wrong, who defines what is right and wrong, sense and nonsense; the idea that “religious and secular ethics are products of human nature so the difference ultimately does not change much”; and the considerations of a “Consequentialist versus Deontological point of view.” These kinds of posts appeared every once in a while and went unresolved. Others discussed whether it was possible to accurately predict the future, even with the help of artificial intelligence, whether code had ethics that were identifiable before its deployment, or the identification and consequences of ‘errors’ made by an AI.

4. (Potential) Benefits

The benefits of AI in education were not highlighted by panelists and participants as much as concerns and challenges were. However, a common vision of the (potential) benefits of AI did emerge between the lines.

Many participants believed that the emergence of AI in society not only meant that new skills would have to be taught in schools so as to best prepare students for the future, but that bringing AI inside of the classroom would also foster new and better ways of teaching and learning. Several participants expressed a tempered hope that with AI tools, we might achieve better teaching and learning outcomes, in part because we’d move away from standardized testing models – “AI should be used to broaden schooling for those that struggle with standardized testing” – and allow for more effective personalized learning, for example. This would in turn better prepare students for the job markets of tomorrow, or empower them to invent new occupations altogether. For participants, though, there is still much that will need to be figured out: “Teaching must lead to inspiration. How can AI contribute?”, and “How can AI help us change traditional classroom settings?”.

There seemed to be a general feeling among participants, especially the students, that AI could and should be beneficial for society: it could “solve social problems that still exist, unaddressed” and “[…] AI needs to help solve collective goods problems”. On a macro scale, there was even hope that it would address some of the inequalities that persist between the global North and South, where access to the internet, and its associated learning potential, is still limited. One Nureva user proposed, in terms of learning, that “AI can be used to create toys that teach children in developing countries”. But several users pointed out that many of the issues that poor and marginalized people face abroad, and in Canada, could be addressed right now, without AI tools: participants and panelists were clear in stating that AI is not a panacea, despite the great potential it carries. It cannot, and will not, solve every problem that people or communities face.

Remote panelist Abebe pointed out that AI techniques could help to reduce inequality, while at the same time remote panelist Gebru added that AI can exacerbate inequality or provide opportunities, depending on how it is implemented. In order for everyone to benefit from AI in education, being mindful of inclusivity and representation in the development and deployment of AI tools should be at the center of any action.

5. Applications

During the “disruptive dialogue” activity, participants were asked to draw their ideal AI robot in education and label its key features and special abilities.

Figure 8: Participants with their posters

In total there were 134 references to AI applications included on the posters. The three main applications were personalized learning (49), emotional support (14), and logistic support (10).

Participants showed considerable interest in the type of AI robot that “teaches every student in a unique way, tries to understand every student’s point of view”. Examples of this were varied, and were succinctly if inadvertently summarised by one participant who envisioned a system that:

Collects information on student goals, assesses student knowledge, tailors content to student interests, tailors content to previous learning, interacts with all students individually, creates tailored learning activities designed to appeal to students’ interests and goals, provides positive feedback.

One example of personalized learning supported by an AI robot also fosters inclusive education: a robot that “Provides learning assistance specific to disability”. Personalized learning extends to motivation, with participants interested in AI robots that could “identify when students are bored by reading [students’] body language”, or relate what students need to learn to their actual interests and real-life situations.

Personalized learning also applies to assessment, an area of education where computers are already widely used. Simple technologies such as multiple-choice testing are now accessible to anyone with an Internet-connected computer, and AI-powered language processing is being tested on student writing. The ability to “customize assessment to the learner” relates both to personalized learning and to automated assessment generally, which also emerged as a sub-theme.
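
As one illustration of what “customize assessment to the learner” could mean in practice, the sketch below is hypothetical and not based on any system shown at the symposium: it selects each next question near the learner’s estimated ability and updates that estimate from their answers. Real adaptive-testing systems use far more sophisticated models, such as item response theory.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    difficulty: float  # 0.0 (easy) to 1.0 (hard)

def next_question(bank, ability):
    # Offer the question whose difficulty is closest to the learner's
    # current estimated ability, keeping the assessment challenging
    # without being discouraging.
    return min(bank, key=lambda q: abs(q.difficulty - ability))

def update_ability(ability, question, correct, rate=0.5):
    # Nudge the estimate above the question's difficulty on a correct
    # answer, and below it on an incorrect one.
    target = question.difficulty + (0.2 if correct else -0.2)
    return min(1.0, max(0.0, ability + rate * (target - ability)))

bank = [
    Question("2 + 2 = ?", 0.1),
    Question("12 x 13 = ?", 0.5),
    Question("d/dx sin(x) = ?", 0.9),
]
ability = 0.5
question = next_question(bank, ability)            # selects "12 x 13 = ?"
ability = update_ability(ability, question, correct=True)
print(question.text, "-> new ability estimate:", round(ability, 2))
```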

The second most prevalent type of application was one which could provide emotional support. One of the posters to Nureva pointed out that, in the future, “AI can/will be capable of so-called human traits like creativity, empathy”. This is likely because, for example, facial recognition software that gauges mood is already under development. One of the participants’ posters incorporated this and other forms of emotional support with a robot that:

Encourages you to do what you need to. Reads facial cues and body language to detect mood, works with the mood detected. Gives you confidence when you do well, increasing self esteem.

Other descriptions of emotional support-enabled AI educational robots included that they would be “Friendly, safe” and “100% confidential, will talk to you about any issues you have in life. Will always be there for you”.

Finally, logistic support, while not directly related to pedagogy, is a crucial aspect of education that the participants felt AI robots could provide. Suggestions ranged from using “advanced algorithms to create unique solutions for transportation costs”, to providing essentials like food and pencils, to helping students find income or “services (e.g. medical, therapy)”. An AI robot which “deals with bureaucracy” is a suggestion that highlights a burden that has, counterintuitively, grown heavier for both students and educators alongside the rise of supporting technologies.

Other specific applications of AI in education encapsulated in the posters included automated assessment, teacher assistant, and experiential learning.

6. Solutions

Solutions which participants brought up were primarily related to AI development that is inclusive of diversity. In terms of existing solutions, remote panelists Abebe and Gebru both belong to, and spoke of, special interest groups which focus on minorities and aim to promote minority inclusion in AI development. These groups may be interdisciplinary and data-driven, and can augment existing AI-oriented research labs and master’s programs by and for the developing world. Examples of activities these groups conduct to bring together and support diverse and geographically disconnected community members include a Facebook forum for support and communication, and conference activities such as workshops designed specifically for minority groups to showcase work, share, and network.

Another existing solution for reducing bias in the development of AI systems was shared on the Nureva Wall: a description of, and link to, a machine learning crash course module offered by Google, “which looks at different types of human biases that can manifest in training data, and provides strategies to identify them and evaluate their effects.” The availability of this module is a helpful step towards getting engineers to consider ethical issues more deliberately, a proposal put forth by remote panelist Gebru. Remote panelist Abebe also recommended that developers keep ethics in mind, engage with other disciplines, and be aware of the diversity existing in their local spaces.
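
One widely used strategy for evaluating the effects of bias, of the kind such training materials describe, is to disaggregate a model’s evaluation metrics by subgroup, since an overall number can hide group-level failures. The sketch below is a minimal illustration with invented data and group names, not material from the module itself.

```python
from collections import defaultdict

# (group, predicted_label, true_label) for a hypothetical classifier.
results = [
    ("group_x", 1, 1), ("group_x", 0, 0), ("group_x", 1, 1), ("group_x", 0, 0),
    ("group_y", 0, 1), ("group_y", 0, 1), ("group_y", 1, 1), ("group_y", 0, 0),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    correct[group] += int(predicted == actual)

# The headline number looks tolerable...
print("overall accuracy:", sum(correct.values()) / len(results))   # 0.75
# ...but disaggregating reveals the model fails one group far more often.
for group in sorted(totals):
    print(group, "accuracy:", correct[group] / totals[group])      # 1.0 vs 0.5
```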

Gebru expressed the need for more diversity in the datasets used to train AI, an issue highlighted by the October 2018 disclosure that Amazon had decided not to move forward with an AI recruiting system that was found to be biased in favor of men. To help users evaluate datasets, Gebru and colleagues have written a whitepaper which encourages those who release datasets to document them (e.g. distribution, intended uses) so that users can determine whether a dataset is appropriate for a given situation. The whitepaper is available here. As mentioned in the ‘Challenges’ section above, one user opined that “AI is not the same as other domains and needs its own solutions”; however, Gebru proposes that AI developers can learn from other industries about how to address problems and put in safeguards before AI technology becomes too ubiquitous.
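
To give a sense of what such dataset documentation covers, the sketch below lists the broad question areas a datasheet addresses. The section names are paraphrased from the whitepaper’s organization, and the prompts are condensed wordings written for this report, not the whitepaper’s exact text.

```python
# Illustrative datasheet question areas, paraphrased; a real datasheet
# answers many detailed questions under each section.
datasheet_prompts = {
    "motivation": "Why was the dataset created, and by whom?",
    "composition": "What do the instances represent, and which "
                   "populations or subgroups are covered?",
    "collection process": "How, when, and with what consent was the "
                          "data gathered?",
    "preprocessing/labeling": "What cleaning or labeling was applied, "
                              "and is the raw data available?",
    "uses": "Which tasks is the dataset suited for, and which should "
            "it not be used for?",
    "distribution": "How is the dataset shared, and under what terms?",
    "maintenance": "Who maintains it, and how are errors reported?",
}

# A prospective user reads the answers to judge fit, e.g. whether the
# dataset's composition matches the population their system will serve.
for section, prompt in datasheet_prompts.items():
    print(f"{section.upper()}: {prompt}")
```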

IV. MEANINGFUL CONTRADICTIONS: CONCEPT MAPPING

The E21 symposium on education and artificial intelligence provided participants with opportunities to contribute to the discussion through online posts, panel discussions, and poster creation. Whereas the thematic findings above cover the breadth of the symposium, considering the underlying contradictions in the themes yields a conceptual map of deeper meaning.

What did participants try to communicate about artificial intelligence in education? Which questions, ideas, and desires were they trying to express? There are no easy answers to these questions, so we produced several concept maps and contradiction maps until we arrived at something that seemed to make sense of, and change how we thought about, the data. We mapped the concepts that we thought people were talking about, as well as our own concepts as we worked through the data. We also broke down every unit of data into ‘contradictions’ (e.g. Access/No Access, Employment/Education, Custom/Standardized) until we arrived at unresolvable contradictions in the data. Doing this generated many questions and new ways of thinking about what people were communicating about artificial intelligence in education.

Figure 9: Mapping of concepts from the E21 symposium on artificial intelligence in education

On the surface, participants’ communications centered on artificial intelligence in general rather than consistently and specifically on AI in education. Although many concerns did not seem directly related to the issue of AI in education, on reconsidering the data it becomes clearer that participants were trying to situate artificial intelligence in education. Taken together, we can think of the collection of our data across media as a complex, multi-faceted discussion about artificial intelligence in education. What follows is a discussion and synthesis of the recurring, potent, or provocative imaginings from the E21 Symposium. Figure 9 illustrates the concept map which emerged from breaking apart and re-assembling the data.

Three Key Figures

Our analysis identifies three key figures responsible for AI in education: the stakeholders, the classes, and the interface (depicted in yellow on Figure 9). The stakeholders are the people responsible, in varying degrees, for how AI is integrated into education. The classes are the key educational tool for stakeholders, one of the central locations of the interface, and a key concern with the introduction of AI in education as it relates to the reconfiguration of social classes. The interface represents the way the technology of artificial intelligence is built, accessed, and distributed, and is a site of speculation and concern. Each of these key figures raises a corresponding question, which reflects participants’ differing perspectives and understandings regarding AI in education: What kind of stakeholders? What kind of classes? What kind of technology? The questions are depicted in purple on Figure 9.

We also explored the way relationships between stakeholders, classes, and interface were imagined (in orange on Figure 9), including questions and demands that participants raised. A feedback loop between the stakeholders, the classes, and the interface seems to be the main mechanism of AI’s participation in education.

Stakeholders

Stakeholders include, but are not limited to: students; educational administrators; researchers in fields such as education, technology, and the social sciences; instructors; developers of AI technologies; government and policy makers; non-governmental organizations; and other interested parties. It is difficult to distinctly identify and label stakeholders, because both education and technology affect, and are affected by, multiple interacting groups and individuals among and between societies and across the globe.

What kind of stakeholder can we envision for AI in education in the 21st century? Our analysis suggests that stakeholders be activists, since participants were fearful of negative outcomes of AI in general, and AI in education in particular, and they wanted those involved to be working for the interests of students and society at large. Stakeholders must also be adaptable, due to the pace of change inherent in the worlds of technology and education; stakeholders and their organizations have to adapt to different sociocultural circumstances and contexts. As adaptable activists, stakeholders are ready to disrupt the status quo; they have to be willing to let go of traditional systems of education. Stakeholders also need to continually expand the reach of their organizations so they do not leave people behind. The first and most obvious way that participants wanted stakeholders to expand was by improving access to educational services and material, one suggested means being improved access to the Internet. One of the stakeholders’ goals appears to be to address gaps in students’ ability to use the interface. When doing the contradiction mapping, we discovered that, rather than the distinction between the haves and the have-nots, the distinction between those who are able and those who are not able better represented the concerns among participants.

Classes

Classes and the structure of classes emerged as a key figure of AI in education. This incorporates classes as both an educational tool and classes as a social object. As an educational tool, classes include not only traditional classroom teaching and learning contexts, but also online learning, home schooling, and other non-traditional formats. Socio-economic class is inextricably linked to learning contexts because inequality was seen as built into educational opportunities, especially where access to education propagates or exacerbates social inequality.

The kind of AI-supported educational context our analysis of the symposium suggests for the future is one that fosters socio-economic equity. Whatever form AI takes, and however it is implemented for educational purposes, participants wanted classes to be growth-oriented and inclusive of diversity. This means that students will be empowered to realize their personal potential and shown empathy, all while human connections are fostered. The classes are focused on cherished values, though other than diversity and inclusion, no specific values were discussed. We are not capable of a meaningful assessment of implied values using the data and knowledge at hand, and, as discussed earlier, there are concerns about the dominance of traditionally ‘Western’ values, as well as questions about which ethical standpoints among many should guide policy making and technological development.

Relationship Between Stakeholders and Classes

If classes are the main tool for education, then they should be a high priority for stakeholders. For example, teachers were often seen as the center of the classroom and they were the supposed executors of new educational techniques. Participants wanted stakeholders, especially through the administration of classes, to act to reduce inequality.

Diversity is one of the most sought-after outcomes of AI in education according to participants. Across the board, participants wanted to see diverse measures, backgrounds, and relationships, aiming to reduce inequity.

Interface

Stakeholders need to make several important decisions regarding how different technologies will be used, and some of those technologies will be the main way students engage with their education. We call these technologies an ‘interface’ between the students and their education. The interface may take the form of a personal device, a cloud service accessed centrally by multiple individuals, or something unforeseen.

Rather than the manner of interfacing with AI, our concern here is with the kind of technology we should use. Participants demanded that the minimum requirement for technology used in an educational context be that it is built and implemented by people with diverse backgrounds in terms of education and life experience. Technology should thus empower the students and the stakeholders; one way it could do so is by reducing bureaucracy. However, what remains unknown is how to assess technology beyond how it is made and implemented. Still, the participants expressed that it is an important task to look at technology from many angles.

Relationship Between Stakeholders and Interface

These technologies are seen as so important that their effects on the students, the stakeholders, and the quality of education have to be vetted from every angle before being incorporated. One of the ways stakeholders can vet a technology is by looking for indicators that it acknowledges diversity in how it was developed and how it can be applied.

Relationship Between Classes and Interface

Class interacts with AI through a technological interface. The interface should be built with constructive and collaborative sharing in mind. This includes sharing the opportunities to profit from AI in education as well as sharing knowledge across backgrounds.

The feedback students receive should reflect their strengths and recognize what they are doing well. This was in stark contrast to the perception of standardized tests and other traditional tools for assessment. Reversing the flow of feedback, data acquired by the stakeholders and the interface should incorporate diverse measures of success and be respectful of privacy.

Seven Expectations For Artificial Intelligence in Education

The seven expectations, depicted in red on Figure 9, emerged throughout the data analysis as agreed-upon outcomes for AI and education. These expectations, which cut across the following conclusions, are:

  • The Stakeholders are Activists
  • Technology must be Vetted
  • Technology Originates from Diverse Backgrounds
  • Feedback Incorporates Diverse Measures of Success
  • Diversity is a Primary Concern
  • This Activity Must Reduce Inequality
  • Artificial Intelligence and Data will be Used

V. CONCLUSIONS

Participants had different understandings and perceptions of both artificial intelligence specifically and education generally. Concerns were raised about the present state of development and deployment of AI, with policy makers and the information technology industry perhaps not sufficiently concerned about, or prepared for, unintended consequences. Whether it is inevitable, or preferable, to have Western-oriented values infused into AI systems at the current stage of AI development was brought up, as were examples of how bias can be built into AI systems unintentionally, and the growing gaps globally and locally between those who have access to technology such as AI-supported education systems and those who do not.

The extent to which an institution of higher education can influence how AI is developed and applied to education will likely be tied to opportunities for stakeholders (including students, teachers, researchers, industry, government, etc.) to work with each other and especially with those outside of their own everyday experience. Recent developments at the University of Toronto, such as the Schwartz Reisman Innovation Centre and the Vector Institute, are examples of how civic-minded members of society, governments, and private industry can work through and with the university system to develop AI systems with attention to social consequences, such as the way in which we interact with each other.

Figure 10: Sample anthropomorphic visualizations of AI in education

There was no consensus on the extent to which AI will displace traditional skills, knowledge and expertise. On one hand, participants were aware of the potential for AI to displace or disrupt workers in almost every field; on the other, it was pointed out that previously (as with the industrial revolution) new jobs have emerged to replace those which technological advancements rendered obsolete.

Figure 11: Hierarchy treemap of poster visualisations, from Nvivo

If there was any emerging directive regarding AI in education, it would be to teach students when and where to apply AI solutions. ‘Simulations’ and working together were prized over lecture-based learning and standardized testing. For example, one student suggested that “AI needs to help solve collective goods problems,” which would require our shared co-operation. It was argued that educational institutions should give students the opportunity to experiment with artificial intelligence to solve problems, and to learn when, where, why and how to use AI tools. There were also calls to radically change the education system, but the vision for what that would look like remains uncertain.

It is interesting to note the way in which participants chose to visualise AI on the flipchart paper. Overwhelmingly, participants chose to draw human-like figures (see Figures 10 and 11) to represent the AI system they proposed to solve an educational problem. These anthropomorphic visualizations, and the applications of emotional support and teacher assistant from the posters, suggest the enduring importance of empathetic social interactions in human learning. While the idea of emotionally intelligent machines may sound like science fiction, this is already a $20 billion industry.

Benefits were not explicitly discussed by participants and panelists as much as concerns and challenges. E21 Consortium member Contact North has pointed out in Ten Facts About Artificial Intelligence in Teaching and Learning that AI is currently being applied to personalised learning, student advising, student assessment, accessibility for students with disabilities, learning analytics, and more.

Symposium participants saw AI as potentially applicable to every aspect of education. Personalized learning, emotional support, and logistic support were the key recurring beneficial applications envisioned by participants. The following extract from a participant poster summarizes various ways that AI can be a benefit to higher education contexts:

Collects information on student goals, assesses student knowledge, tailors content to student interests, tailors content to previous learning, interacts with all students individually, creates tailored learning activities designed to appeal to students’ interests and goals, provides positive feedback.

Attention to ethics and diversity emerged as a strong concern of participants, but also one that complicates the process as humanity has no unified view of how to assuage these concerns, let alone the question of how to accommodate them in poorly understood AI systems. While the question of whether ethical AI is possible remains, the human element recurred throughout the symposium:

Totally unexpected that the AI discussion at #E21sym centred on human skills – ethics, diversity, inclusion, access – and human connection rather than technical skills and doom and gloom.

Solutions to the challenges and concerns were mostly in the realm of diversity and inclusion. Specifically, the formation of groups dedicated to minority access in the development of AI was highlighted by remote panelists. The use of datasheets to describe datasets, and awareness of the importance of diversity, were also put forward as potential solutions. Remote panelist Abebe pointed out that AI techniques can help to reduce inequality, while remote panelist Gebru noted that AI can exacerbate inequality or provide opportunities, depending on how it is implemented. There seemed to be a general feeling among the participants that AI can and should be beneficial for society, and that it could “address enduring societal problems”.

Several participants felt that developments in artificial intelligence that did not address the needs of the entire population of Earth were empty and destructive. They noted that whatever form a technology takes, there are many who are unable to use it. Speaking to different abilities, one participant said:

Using AI in learning, looking at AI bridging gaps, helping people who aren’t able to speak, those are excellent uses but we have to be careful that AI is not used to leave other people behind. People who for one reason or another do not have access to AI can’t use AI. You have to remember just like electricity is available in the walls now, just like writing is available anywhere we need it, learning in AI in the future will be a commodity. We just plug into intelligent data processing whenever we need it and that’s what it is going to be like but like not everybody can read in bright light not everybody can read, we are going to have accessibility issues with AI as well. We want to be aware of that and be ahead of that if we can.

Since proprietary artificial intelligence research is hidden away, it is hard for its creators to claim that it was developed without leaving people out. Participants pointed to both the lack of diversity in AI research and implementation and the lack of global Internet access. All the technological advances in the world will not be very helpful for those who cannot use them. A participant pointed out during the third panel discussion that instead of focusing on ethics we should focus on needs. He argued that although the discussion was focused on problem solving and modelling, the problems in Ottawa are different from the problems elsewhere in Canada, which are different again from the problems in Africa. He encouraged local people to develop local solutions that respond to their needs, rather than relying on a top-down AI. We think that a needs-focused solution is important.

ACKNOWLEDGMENTS

The authors of this report would like to thank the E21 Consortium members for their work as they improve education for the 21st century. We would like to thank the many sponsors who supported and continue to support the improvement of education with the E21 Consortium. Special thanks to the participants and panelists for sharing their knowledge and insights with us; may they continue to help us grow in the future. An extra special thanks to the organizers for their indispensable work, in no particular order: Marc Villeneuve, Gisèle Richard, Emilie Newman, Marc Bélanger, Manon Drouin, Chrystia Chudczak, Andrew Barrett and his team at Shopify, and the Carleton students and employees and Shaily Zolfaghari who volunteered at the reception. We also want to thank Patrick Labelle for his support in data analysis.

Last but not least we’d like to thank Aline Germain-Rutherford and Richard Pinet for their inspiring leadership.