Shaping Artificial Intelligence for Tomorrow: Youth Insights and Strategic Recommendations


Introduction

In an era where artificial intelligence (AI) is rapidly reshaping society, Canada’s Chief Science Advisor’s Youth Council (the Youth Council) takes a critically engaged stance on the profound challenges and opportunities AI presents. Unlike past technological advancements, which acted primarily as tools to enhance human productivity, AI represents a paradigm shift with the potential to exceed human capabilities in ways that demand urgent and thoughtful consideration. AI's rapid evolution and unpredictable trajectory necessitate a distinct and proactive approach, one that acknowledges both its transformative potential and its risks.

This report addresses a fundamental question: How do young scientists perceive the challenges and opportunities of AI, and what should be done about them? As emerging leaders in science, members of the Youth Council are uniquely positioned to assess AI’s impact on research, education, and the scientific workforce. We recognize that AI’s influence will extend far beyond automation, altering the very nature of scientific inquiry, decision-making, and ethical responsibility.

To explore these concerns, this report is structured around key areas where AI is expected to have the most profound effects on students, early-career researchers, and future generations of scientists. Each section examines critical aspects of AI’s role in science, and highlights the pivotal role of youth in shaping AI’s integration into research and innovation in ways that are responsible, equitable, and forward-thinking.

Ultimately, the Youth Council’s perspective underscores a key message: AI is not just another technological revolution. It is an innovation that demands new frameworks, new ways of thinking, and new forms of scientific leadership. Through this report, we call on the scientific community, policymakers, and industry leaders to consider the perspectives of youth as the future of AI is navigated.



Calls to Action

This section centers on the importance of including youth voices in shaping the future of AI. As the next generation of leaders, innovators, and stakeholders, young people will be directly affected by the technologies being developed today, making it essential for their perspectives to be actively incorporated into decision-making processes. Through this report, the Youth Council calls for greater youth involvement in AI policy discussions and decision-making, ensuring that the voices of youth are not only heard but valued. Alongside this, we highlight several critical issues that need urgent attention: bridging the digital divide, ensuring equitable access to AI technologies, and embedding principles of equality, diversity, and inclusion (EDI) into AI policy. We also emphasize the need for stronger privacy protections, ethical guidelines for the use of open-source data, and prioritizing AI safety on the global stage. These calls aim to ensure that the development of AI is transparent, inclusive, and aligned with the values of a diverse society, with youth leading the way in advocating for a fair and ethical AI ecosystem.

1. Address the digital divide and integrate EDI:

  • Prioritize initiatives that ensure equitable access to AI technologies for all, particularly in rural and underserved regions, where access to telecommunications and AI tools may be limited.
  • Ensure EDI principles are embedded in AI policy-making processes to represent diverse perspectives and needs, including those of youth.
  • Enhance accessibility for individuals with disabilities (e.g., through speech recognition and assistive technologies).

2. Involve youth in decision-making:

  • Establish formal mechanisms for the explicit and systematic involvement of youth in AI policy discussions and decisions.
  • Ensure AI literacy is integrated into the education of the next generation of young Canadian students and scientists.

3. Establish ethical regulations for open source and public datasets in AI training:

  • Develop and implement clear regulations for the ethical use of open source and publicly available data in training AI models (e.g., LLMs), ensuring transparency and accountability; for example, by creating a “not for use in datasets” licensing scheme for software and text.

4. Enhance privacy protections:

  • Advocate for better-defined privacy laws that protect individuals’ data in the development of AI and machine-learning technologies.

5. Establish an ethical framework to guide the short- and long-term development and deployment of AI:

  • Create a comprehensive ethical framework to guide the development and deployment of AI, ensuring it aligns with societal values. This framework should grapple with the difficult question of how society can live in a world where AI capabilities approach, and perhaps exceed, those of humans.

6. Improve public education on AI:

  • Implement educational programs that properly inform society about AI technologies, their benefits, their implications for the future, and how individuals can better protect their data and privacy.

7. Ensure consent for data collection:

  • Enforce regulations preventing companies from collecting personal data without informed consent from individuals.

8. Prioritize AI safety in international leadership:

  • Position AI safety as a key focus area in Canada’s G7 leadership discussions, promoting global standards for safe AI development.

9. Develop university guidelines for the use of AI:

  • Encourage universities to develop and disseminate clear directives and guidelines regarding the ethical use of AI by students.



Current Landscape of AI

According to the UK Government’s 2023 policy paper, “A pro-innovation approach to AI regulation,” AI systems are defined as “products and services that are ‘adaptable’ and ‘autonomous’” (Gajjar, 2024). They can perform tasks requiring human-like intelligence, such as understanding language, recognizing patterns, and learning from data. AI algorithms are used to process information, solve problems, and adapt to accomplish these tasks. Key terminologies, including machine learning (ML), deep learning (DL), large language models (LLM), and computer vision, can be found in the UK Parliament’s AI Glossary (Gajjar, 2024).

AI capabilities are often categorized along a spectrum from narrow to general. The AI implementations deployed today are "narrow AI" because they are designed for specific, yet still complex, tasks. For example, an AI model trained for facial recognition is not necessarily suitable for drug discovery. At the other end of the spectrum, "artificial general intelligence" (AGI) is a theoretical form of AI capable of understanding, learning, and performing any intellectual task across diverse and unfamiliar domains – including abstract reasoning – at a human-like level. The quality of a model's output, and the generality of the input conditions under which it remains accurate, can be used to gauge how "intelligent" the model is.

Key Milestones

To better understand AI's impact and the modern issues surrounding it, it is important to trace the innovations that led to AI's ubiquity today. Since the inception of computing, scientists have been intrigued by the potential for machines to rival human reasoning.

  • 1950: Alan Turing considers the possibility of human-like reasoning in machines, paving the way for the conceptualisation of AI with what became known as the Turing Test.
  • 1956: John McCarthy coins the term “artificial intelligence” at the Dartmouth workshop, widely regarded as the birth of AI as a formal field of study. The following years saw pivotal research that laid the foundations for modern neural networks, including Frank Rosenblatt’s Perceptron, an artificial neuron modeled on its biological counterpart (Rosenblatt, 1957), and Geoffrey Hinton’s work on training artificial neural networks with backpropagation (Rumelhart, Hinton, & Williams, 1986).
  • 1997: IBM’s Deep Blue computer defeats reigning world chess champion Garry Kasparov in a six-game match, winning two games, losing one, and drawing three. Deep Blue’s success represented a key milestone in the evolution of AI, demonstrating that computers could rival human game-playing reasoning.
  • Early 21st century: Enhanced computational capacity and innovative ML techniques emerge. Decision tree algorithms, neural networks, and support vector machines lay the foundation for more sophisticated AI systems, made more useful and capable by greater access to big data. The combination of large datasets and more powerful computing resources enabled the emergence of deep learning (DL) techniques. During this period, the Canadian Institute for Advanced Research (CIFAR) was one of the earliest and largest funders of AI research.
  • 2004: CIFAR's Neural Computation & Adaptive Perception program supported pioneering researchers including Drs. Geoffrey Hinton, Yoshua Bengio, and Yann LeCun — all three of whom would go on to win the 2018 Turing Award, with Hinton also winning the 2024 Nobel Prize in Physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks” (Nobel Prize, 2024).
  • 2012: Canadian scientist Geoffrey Hinton and his lab at the University of Toronto publish AlexNet — the first deep learning model to approach human performance in large-scale image recognition, a central task in the field of computer vision (Krizhevsky et al., 2012). The success of computer vision research underscored AI’s ability to derive insight and meaning from large and non-trivial data (Khan et al., 2021). This has translated into the pharmaceutical sector, where deep learning techniques have contributed to novel drug discoveries (Vamathevan et al., 2019).
  • 2015: DeepMind’s AlphaGo becomes the first AI system to defeat a professional human player at the game of Go (Silver et al., 2016). The model was trained on both large volumes of human game data (using supervised learning techniques) and self-play data.
  • 2017: The AlphaGo Master version of the software beats the world’s top-ranked player three games in a row, after which the developers retire the system. A vital component of AlphaGo’s success was its innovative use of reinforcement learning (RL), an ML technique in which autonomous agents learn by interacting with their environment and receiving feedback in the form of rewards or penalties. RL was pioneered by Canadian scientist Richard Sutton at the University of Alberta and has since gained prominence through applications such as autonomous systems like vehicles and robotics, healthcare decision-making optimization, and other resource management problems (Kiumarsi et al., 2017).
  • 2022: OpenAI launches ChatGPT, kickstarting the modern era of LLMs. These breakthroughs have been enabled by massively scaling neural networks called Transformers (Vaswani et al., 2017; Brown et al., 2020), trained on vast corpora of text drawn from the internet.
  • 2024: Sora, released by OpenAI, ushers in a new era of generative AI by enabling realistic, high-resolution video generation from text prompts. This demonstrates the accelerating convergence of generative AI across text, images, and video, raising new opportunities and ethical considerations for media production, misinformation, and digital creativity.
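Rosenblatt’s Perceptron, mentioned in the milestones above, remains the simplest illustration of a machine learning from data. The sketch below is an illustrative reconstruction of the error-driven perceptron update rule (not the original 1957 implementation), shown learning the logical AND function:

```python
# Perceptron learning rule: nudge each weight toward the input
# whenever the prediction disagrees with the label.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with labels 0 or 1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0              # connection weights and bias
    for _ in range(epochs):
        for x, label in samples:
            # Step activation: "fire" if the weighted sum exceeds zero
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred         # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Four examples of the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x, _ in data])  # reproduces the labels: [0, 0, 0, 1]
```

Backpropagation generalizes this error-driven idea to multi-layer networks, which is what made the later milestones in the list possible.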
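The reinforcement learning loop behind AlphaGo’s training can likewise be sketched in miniature. The toy example below (a hypothetical five-position corridor, solved with classic tabular Q-learning rather than anything resembling AlphaGo’s scale) shows an agent learning from reward feedback alone:

```python
import random

# Tabular Q-learning on a toy corridor: the agent starts at position 0
# and must learn to walk right to a rewarded goal at position 4.

N_STATES = 5
ACTIONS = (-1, +1)                      # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for _ in range(200):                    # training episodes
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0       # reward only at the goal
        # Core update: move Q toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy steps toward the goal from every position
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]: always step right
```

AlphaGo combined this kind of value learning with deep neural networks and tree search, but the principle of improving behaviour from reward signals is the same.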

From its initial theoretical developments in the 1950s to dedicated academic research in the 2000s, AI has now reached the hands of millions of people around the world each day. While this ubiquity presents many promises and challenges, Canadian institutions and researchers continue to lead developments in the field through the Pan-Canadian Artificial Intelligence Strategy, led by organizations such as CIFAR, the Canadian Artificial Intelligence Safety Institute, and research institutes such as Mila, the Alberta Machine Intelligence Institute (AMII), and the Vector Institute.



The Advantages of AI for Canadian Youth: Impacts on the Healthcare and Education Fields

The Youth Council is composed of 16 members from diverse scientific fields, with half of them working in the healthcare sector. All members are college or university students, or recent graduates, who have firsthand experience with the transformative impact AI can have in healthcare and education. In this section, we describe the advantages of AI for Canadian youth and future generations in these two key fields.

Healthcare

AI’s capability to process vast datasets, identify patterns, and predict outcomes is revolutionizing key aspects of healthcare - such as diagnosis, treatment, and disease/care management - while also serving as a model for enhancing productivity and efficiency in other industries. Many Youth Council members specialize in health-related scientific fields such as medicine, microbiology, public health and epidemiology. We have personally observed how quickly AI has been embraced by the health sciences. As a generation with access to unprecedented amounts of health data, youth are uniquely positioned to leverage AI-driven insights for more personalized and patient-centric healthcare. Furthermore, with an increasingly strained healthcare system and a rapidly aging population, AI-enabled efficiencies will be essential to ensuring sustainable and effective care delivery.

The integration of AI into healthcare systems offers numerous benefits for patients, care delivery systems, and the general public. At a time when Canada's health system is experiencing challenges (i.e. rising costs, limited resources, and an aging population), AI presents an opportunity to address some of these pressing needs. Canada spends over 11% of its GDP on healthcare (CIHI, 2022), and further increases are expected. AI's ability to optimize resource allocation, improve operational efficiencies, and reduce unnecessary interventions presents a critical opportunity to reduce costs while improving quality of care. Within the context of predictive modeling in medicine and public health, applications for AI include predicting disease states, patient survival, readmissions, and chronic care management; improving end-of-life care and mortality predictions; and automating clinical decision systems (Yang, 2022; Noorbakhsh-Sabet et al., 2019). This fundamentally improves the collective understanding of disease trajectories and care management to support the delivery of more relevant treatment modalities, while also addressing bottlenecks in the delivery and administration of care.

AI has already demonstrated the potential to exceed human performance in certain medical tasks, such as diagnosing diseases, predicting outcomes, and analyzing vast amounts of data, often more quickly and accurately than individual healthcare providers (Lehharo, 2024). Indeed, research has shown that AI can achieve or surpass the performance of human experts across various medical-image analysis tasks, such as breast cancer screening (McKinney et al., 2020; Hickman et al., 2021). Although AI has the potential to outperform individual healthcare providers in specific tasks, its primary role is not to replace them but to serve as a supportive tool, enhancing decision-making, streamlining treatments, and enabling healthcare providers to deliver more effective and patient-centred care. By enhancing diagnostic accuracy and facilitating early disease detection, AI can reduce the need for expensive treatments or emergency care down the line, which directly impacts overall healthcare costs (Wang, 2024). AI can also be pivotal in improving chronic disease management, a major driver of healthcare spending, by predicting and managing conditions more effectively (Ramezani et al., 2023). Additionally, AI can be used for surveillance activities such as by public health organizations to monitor, track, and assess emerging patterns and trends of disease, allowing for early detection and targeted interventions (Shaban-Nejad et al., 2022).

Furthermore, AI can streamline administrative processes and significantly reduce the time clinicians spend on routine tasks, which could alleviate workforce burnout and improve productivity. In fact, the early implementation of AI in Canada’s healthcare system has shown promising results in reducing administrative workloads for practitioners — AI-powered scribe technology in Ontario has cut paperwork time by 70% to 90%, saving family doctors an average of three to four hours weekly (OntarioMD, 2024). Physicians, who typically spend about 10 hours per week on administrative tasks, could significantly benefit from AI tools, allowing them to focus more on patient care and alleviating workforce strain. Overall, these efficiencies suggest that broader AI adoption could greatly enhance productivity across Canada’s healthcare system and help compensate for a shortage of healthcare professionals in Canada (Queen’s University, 2024).

For the general public, the integration of AI in healthcare holds the potential to significantly improve access to timely care, enhance patient outcomes, and contribute to a more equitable healthcare system. AI-powered tools that prioritize high-risk patients, streamline referrals, and optimize care delivery could greatly benefit underserved populations, including those in rural and remote areas of Canada where healthcare access is often limited. Additionally, by giving people access to personalized insights and recommendations, AI can empower individuals to manage their health more effectively and increase scientific literacy. ChatGPT has been used to rewrite educational materials about pediatric head and neck conditions in otolaryngology at a 5th-grade reading level to improve comprehension and readability for patients. In adults with knee osteoarthritis, an AI-enabled decision aid was used to educate patients on their care and analyze alternatives to provide personalized care and better outcomes.

These platforms may use chatbots (i.e., a computer program designed to simulate a conversation with human users), virtual health assistants, or interactive tools to deliver health education materials, medication reminders, self-care tips, and lifestyle recommendations tailored to patients’ specific needs and preferences. For example, AI-driven remote patient monitoring (RPM) platforms can engage and educate patients by providing personalized health insights, feedback, and recommendations based on individual health data (David, 2024).

The increased usage of wearables and other health-monitoring devices is producing vast amounts of real-time data, with the potential to unlock early disease detection and more proactive management of chronic conditions (Babu et al., 2023). The Apple Watch’s heart-monitoring feature, for example, has been qualified by the U.S. Food and Drug Administration (FDA) as a Medical Device Development Tool (MDDT) (FDA, 2023). This designation exemplifies the potential of wearables to generate actionable health data, which patients can use to manage their own health and which can support clinical trials by providing critical data for early diagnoses. As such, AI has the potential to reduce health disparities by democratizing health literacy and making healthcare more accessible, personalized, and efficient.

Education

Given that the Youth Council consists of students ranging from college to postdoctoral levels and early career researchers who earned their degrees within the past five years, we recognize that AI has rapidly and seamlessly integrated into classrooms and study sessions. Its integration has greatly reduced the time devoted to assignments and homework. Some students use AI as a powerful search engine that provides instant answers, while others rely on it as a 24/7 “personalized tutor”. Regardless of its application, AI is reshaping how students learn and will undoubtedly continue to do so for future generations. Its influence is already evident among Youth Council members as we progress in our professional journeys.

AI-powered tools can adapt lesson plans to individual learning styles and speeds, ensuring that students receive the support they need to succeed. Instant feedback and targeted resources help learners overcome challenges more effectively, making education more engaging and accessible for diverse groups. For students with disabilities or learning difficulties, AI can provide tailored solutions that address their unique needs, fostering greater equity in education. Although the benefits of AI use in education are substantial, it is crucial that human-led instruction remains central to education, ensuring that students develop the holistic skills required for a well-rounded life (Ali et al., 2024).

The use of AI in higher education has been controversial, with proponents advocating for improved learning productivity and task automation, while opponents are hesitant to embrace its use, voicing concerns about over-reliance on technology and the loss of critical thinking and problem-solving skills among learners. With AI becoming a core component of healthcare delivery, students in medical, nursing, and allied health programs will need to be trained to work alongside AI systems and to develop the technical skills to interpret and leverage AI-driven insights in clinical practice. Already, AI tools (particularly LLMs like ChatGPT) are reshaping medical training by enhancing educational content, creating realistic clinical scenarios, assisting in medical writing, and providing personalized learning experiences (HMS, 2023). However, these tools remain limited (one evaluation reported a median accuracy score of 5.5 across medical specialty content), and learners are cautioned against relying entirely on chatbots. By improving accessibility, personalization, and simulation in medical education, AI is enhancing the preparation of students for real-world medical practice.



The Challenges of AI

As AI continues to shape the future, its adoption presents unique challenges that must be addressed to mitigate risks, safeguard ethical principles, and ensure a balanced approach to innovation. In this section, we discuss four key areas where we believe AI will be most impactful and where in-depth discussion is required for appropriate adoption and use. These challenges extend beyond the scientific fields in which Youth Council members specialize, as AI affects the entire world in which future generations will grow up.

Geopolitical Challenges

For young Canadians, geopolitical tensions surrounding AI are not just distant headlines—they are deeply personal and shape the opportunities, stability, and ethical landscape we inherit as we prepare to enter the workforce. While AI innovations in one country can, in theory, contribute to global progress, in practice, geopolitical rivalries often lead to restrictions on technology sharing, talent mobility, and access to critical AI infrastructure. The global race for AI dominance, led by the United States and China, thus poses immediate and long-term implications (Jones, 2025), as countries increasingly prioritize national security and economic interests over open collaboration. As Canada works to assert its role in this rapidly shifting landscape, we face the challenge of balancing collaboration with powerful allies while safeguarding our technological sovereignty.

The monopolization of AI technologies by a handful of countries raises concerns about international equity and access to innovation. While many AI technologies are open-source, monopolization can still occur through control over key infrastructure, vast datasets, and advanced computing power—resources that are concentrated within a few dominant nations and corporations. The ability to develop and deploy cutting-edge AI at scale depends not just on access to software, but on high-performance computing, specialized chips (like those from Nvidia and Taiwan Semiconductor Manufacturing Company (TSMC)), and massive proprietary datasets, which remain out of reach for many countries. This concentration of AI capabilities risks sidelining other global leaders like Canada, limiting our role in shaping AI governance and leaving young Canadians dependent on foreign-controlled tools and infrastructure. For Canada, this dynamic is particularly precarious, as our nation navigates its close ties with the U.S. while maintaining trade relationships with other global powers. Young Canadians want to ensure that our country remains a leader in ethical AI development, promoting fairness on the global stage while protecting our own economic independence.

The Canada-U.S. relationship, while a source of strength, also presents unique challenges for our generation. American dominance in tech innovation—bolstered by Silicon Valley giants—offers Canadian youth access to collaboration, investment, and cutting-edge research. However, this proximity also creates risks of dependence. Deloitte Canada’s 2023 report, commissioned by CIFAR, AMII, Mila, and the Vector Institute for Artificial Intelligence, highlights that while Canada has achieved significant milestones in AI talent concentration, research publications, and venture capital investments, more decisive action is required to ensure the integration of AI across the economy. The report emphasizes that leadership in AI could shift to countries that invest more aggressively and harness its potential (Deloitte Canada, 2023). If Canadian talent and startups continue to flow southward in pursuit of higher salaries and greater resources, our domestic AI ecosystem may struggle to sustain itself (Koetsier, 2024). This talent drain is acutely felt by young Canadians entering a science, technology, and innovation labour market that often undervalues their contributions compared to international competitors.

While Canada’s immigration policies have been designed to attract international talent, the country must work harder to retain its homegrown expertise by offering competitive funding and career opportunities within its borders (Singer, 2024). The rise of remote work has created new possibilities, allowing young Canadians in AI-related fields to access Silicon Valley-level salaries while remaining in Canada. While this mitigates some of the risks of physical talent migration, it also raises concerns about long-term investment in Canada’s own tech ecosystem. If Canadian workers contribute to foreign companies without reinvesting their expertise into domestic firms, research institutions, or startups, our ability to build a self-sustaining AI industry could be compromised. Moreover, U.S. policies such as export controls on AI technologies (U.S. Department of Commerce, 2025) could indirectly restrict Canada’s ability to engage in the global AI marketplace. These restrictions appear designed to address competition from major global players, potentially impacting Canada’s access to key technologies. As youth, we worry about Canada being caught in the crossfire of global trade disputes, particularly as we seek meaningful careers in emerging industries.

Canada’s proactive stance on ethical AI development and initiatives like the Roadmap for a Renewed U.S.-Canada Partnership (Office of the Prime Minister of Canada, 2021) offer hope. However, young Canadians want more than high-level agreements; we need tangible investments in homegrown innovation, competitive funding for research, and pathways to retain Canadian talent. Initiatives driven by the Government of Canada, like the Pan-Canadian AI Strategy ($208 million over 10 years), the upcoming AI Compute Access Fund (up to $300 million to provide small and medium-sized enterprises (SMEs) with affordable access to high-performance computing resources), and the AI Assist Program ($100 million through the National Research Council’s Industrial Research Assistance Program (IRAP) to help SMEs integrate AI technologies) are steps in the right direction. However, ensuring our generation thrives in a competitive, globalized AI-driven economy requires continued investment and strategic support for emerging AI leaders.

The intersection of AI and military applications adds another layer of complexity. Canada and the U.S. are working through NORAD to modernize their defence systems with AI technologies, including threat detection (Department of National Defence, 2024). While these collaborations aim to bolster national security, they raise profound ethical questions for our generation.

Young Canadians are deeply concerned about the implications of military AI. The potential for autonomous weapons, reduced human oversight, and accidental escalations in conflict runs counter to our values of peace and human dignity (Robins-Early, 2024; Morgan et al., 2020). As global citizens, we want Canada to champion international agreements that govern the ethical use of AI in defence, ensuring transparency, accountability, and human oversight.

Youth Council Initiative: Shaping Global Policy on Autonomous Weapon Systems

Members of the Chief Science Advisor’s Youth Council recently submitted a policy brief to Think7, the official engagement group of the G7, advocating for a structured framework on Autonomous Weapon Systems (AWS). Their proposal, A Coordinated Tier System for Autonomous Weapon Systems, outlines a five-level system distinguishing human oversight from machine-driven decision-making, ensuring responsible AI use in military contexts.

Authored by Harsh Sharma, Kevin Kasa, Chloé Currie, Julia Messina-Pacheco, Louis-Alexandre Fournier, Matthew Taylor, and Pahul Singh, the brief emphasizes international collaboration to mitigate risks and guide ethical AWS development. If selected, it will be published on the Think7 Canada website and could influence the G7’s final communiqué.

With growing geopolitical tensions and the rapid advancement of AI, the Youth Council members stress the urgent need for ethical governance of autonomous weapons, ensuring global stability and adherence to humanitarian principles.

Environmental Impacts

For young Canadians, the environmental impacts of AI are deeply tied to our future quality of life. Large-scale AI models require vast amounts of computational power, resulting in significant energy consumption and carbon emissions that contribute to climate change, as well as demand for physical space and fresh water to maintain data systems (Tulloch, 2024). Training a single large AI model, such as those used in natural language processing (e.g., GPT-3), can take months of computational time. During this period, the energy-intensive process can generate more than 626,000 pounds of CO2—nearly five times the lifetime emissions of an average car (Strubell et al., 2019). As AI adoption scales globally, its carbon footprint is projected to grow substantially, posing challenges to international climate commitments like the Paris Agreement, which aims to limit global temperature rise to 1.5°C above pre-industrial levels (International Energy Agency, 2023). The energy-intensive nature of AI infrastructure must therefore be accounted for as part of global emissions reduction strategies.
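The scale of such figures can be checked with a back-of-envelope calculation: energy drawn by the training hardware, scaled by data centre overhead, multiplied by the grid’s carbon intensity. Every input in the sketch below is an illustrative assumption, not a measurement of any particular model:

```python
# Rough training-emissions estimate: accelerator energy consumption,
# scaled by data centre overhead (PUE), times grid carbon intensity.
# All values are illustrative assumptions.

gpu_power_kw = 0.3         # average draw per accelerator, kW (assumed)
n_gpus = 512               # accelerators running in parallel (assumed)
hours = 24 * 30            # one month of training (assumed)
pue = 1.5                  # power usage effectiveness overhead (assumed)
grid_kg_per_kwh = 0.4      # grid carbon intensity, kg CO2e/kWh (assumed)

energy_kwh = gpu_power_kw * n_gpus * hours * pue
co2_tonnes = energy_kwh * grid_kg_per_kwh / 1000   # kg -> tonnes

print(f"{energy_kwh:,.0f} kWh -> {co2_tonnes:,.1f} tonnes CO2e")
# 165,888 kWh -> 66.4 tonnes CO2e
```

The grid term dominates: rerunning the same estimate with a hydro-dominated grid of roughly 0.02 kg CO2e/kWh cuts emissions about twentyfold, which is the quantitative case for siting data centres where renewable power is abundant.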

AI’s environmental impact raises critical questions about global accountability in the fight against climate change. The United Nations Sustainable Development Goals (SDGs), particularly Goal 13 (Climate Action), emphasize the shared responsibility of nations, industries, and policymakers to curb emissions. However, AI’s rapid expansion risks outpacing regulatory frameworks, making transparency and sustainability crucial. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) highlights the importance of developing AI systems that align with sustainability principles and international climate commitments. Without proactive measures, such as mandating transparency in AI energy consumption and setting international standards for green AI, this technology risks undermining the very sustainability goals it could help achieve (UNESCO, 2021).

Integrating AI innovations that reduce environmental impacts is crucial for sustainable development. A notable example is China's DeepSeek, an AI startup recognized for its cost-effective and efficient models. DeepSeek's advancements have led to AI applications with lower energy consumption, thereby reducing carbon emissions associated with large-scale computations (Truby, 2025). Furthermore, AI technologies like DeepSeek’s can be instrumental in environmental monitoring and management. For instance, deep learning frameworks have been developed to map solar power plants in China using satellite imagery, facilitating better planning and expansion of renewable energy infrastructure (Hou et al., 2019). By adopting and further developing such AI innovations, Canada can enhance its commitment to environmental stewardship, ensuring that AI serves as a tool for sustainability rather than a contributor to ecological degradation.

Canada is uniquely positioned to lead in sustainable AI development, particularly through the establishment of green data centres. Provinces like Quebec and British Columbia, endowed with abundant renewable energy resources, offer a pathway to significantly reduce the carbon footprint of AI technologies. The concept of "green AI" has emerged, advocating for energy-efficient AI models trained using sustainable power sources (Sim, 2023; MIT Technology Review Insights, 2023). Shifting to renewable-powered data centres and optimizing model efficiency could substantially cut emissions while ensuring AI remains a driver of innovation rather than a contributor to climate degradation.

Integrating green data centres not only aligns with environmental goals but also offers substantial economic benefits. By leveraging renewable energy sources and implementing energy-efficient technologies, these centres can reduce operational costs and enhance corporate social responsibility (MIT Technology Review Insights, 2023).

Furthermore, the global demand for sustainable infrastructure presents an opportunity for Canada to attract international AI leaders. By offering incentives and fostering an environment conducive to green data centre development, Canada can position itself as a hub for sustainable AI innovation.

As young Canadians, we see this as an opportunity to build a green AI sector that aligns with our generation’s commitment to environmental stewardship. Joint Canada-U.S. initiatives, such as creating green data centres and deploying AI for environmental monitoring, could set global standards for sustainability. These efforts would not only make meaningful progress towards addressing the climate crisis but also create sustainable job opportunities for young Canadians in emerging fields that prioritize climate-conscious innovation. By investing in green AI technologies, Canada can strengthen its leadership in both AI innovation and climate accountability, demonstrating that economic progress and environmental responsibility can go hand in hand.

The Profound Impacts of Misinformation

Misinformation has long been a concern, especially on the internet, where the sheer scale of opinion can crowd out the truth. However, AI has made mis- and disinformation both easier to create and more believable. As recently as 2022, AI-generated misinformation made up only a small fraction of manipulative content, yet a recent preprint indicates that AI-generated images alone are now nearly as prevalent as conventional text doctoring and image editing (Dufour, 2024). While standard image editing alters the context and details of existing content, AI generation can invent audio and visual material wholesale, fabricating situations that never occurred with little effort.

Since the popularization of AI, there have been persistent concerns about its potential to interfere with elections. In 2024, there were attempts across the world to use “deepfakes” (i.e., videos in which a person’s face or body is digitally altered to make them appear to be someone else or to say or do things they did not) to influence political elections. For example, a deepfake replicating President Joe Biden’s voice was used to urge voters not to participate in the New Hampshire Democratic primary election (Schneier & Sanders, 2024). While an effect on turnout is more immediately measurable, it will take time to unravel how AI-generated information influenced voters’ decisions. The number of viral examples leads us to question how many more attempts occurred without being detected or noticed on a larger scale.

The current internet landscape makes it difficult to separate fact from fiction and AI-generated content from that created by humans. This is true even on search engines: Google’s AI Overview tool, designed to summarize answers to user questions, has been shown to present incorrect and even dangerous information as fact (Harding, 2024). The spread of misinformation is further exacerbated by social media companies removing fact-checking features. Additionally, Meta recently announced that it is introducing official AI accounts on Facebook and Instagram (Westfall, 2024), which may further complicate online interactions.

AI-generated content can also have very personal implications. People can use these platforms to generate fake, sexually explicit images of anyone, from celebrities to members of the general public. In schools, such images have become yet another way for teenagers to bully one another, with devastating consequences (Cochran, 2024; Tech News Briefing, 2024). In some cases, victims were punished by their schools because administrators believed the fake images were real (Cochran, 2024). While it was already possible to make fake, explicit images, the rise of AI has made it unnervingly easy (Pfefferkorn, 2024). Unfortunately, AI use has outpaced the legal system, and there are currently no clear paths to justice. In the meantime, this practice disproportionately affects women and girls (Schmunk, 2024). Bullying through “deepfakes” is also not limited to explicit images, as these tools can be used to put someone in any kind of compromising or embarrassing position. Because bullying primarily impacts youth, this aspect of AI will be especially important for the youngest generations. And because bullying is primarily dealt with by adults, it is imperative that they have the knowledge, skills, and legal power to take action should it occur.

Overall, youth are particularly vulnerable to changes to the internet, both positive and negative, as they are active participants in the digital ecosystem. In Canada, more than 99% of youth (ages 15-24) use the internet daily (Statistics Canada, 2022), and teenagers now spend an average of 4.8 hours per day on social media alone (Rothwell, 2023). It is often assumed that youth are more capable of handling technological advances than older generations; however, whether they are better at identifying AI-generated content remains unclear. While youth report encountering AI on social media more frequently and express greater confidence in their ability to identify it, many factors may contribute to these self-reports (de Léon, 2024). This is important to understand, as reliable identification of AI-generated content would at least mitigate some of the harms.

Youth will be overwhelmingly impacted by changes to the education system triggered by the popularization of AI. There is a persistent concern that without explicit and strict standards for the use of AI in education, the value of education itself could be undermined (Rowsell, 2024). Without accurate detection and enforcement of standards regarding the use of AI, the value of a degree may be diminished if employers assume widespread reliance on generative tools. Unfortunately, a recent preprint indicates that AI detection tools may be accurate only 39 percent of the time, a troubling figure given that these tools are being used to justify flagging students for misconduct (Perkins, 2024). In other words, not only do we lack an accurate way of identifying AI-generated work, but students also risk being falsely accused. With so many pressures on students, it may be difficult to resist taking the easy way out, denying themselves the opportunity to truly learn valuable skills. Without clear and consistent policies from educational institutions, issues related to student protection and fairness may arise, particularly in cases involving plagiarism. Although educational institutions have clear guidelines on plagiarism, not all of them properly account for the use of AI platforms.
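The false-accusation risk follows from simple base-rate arithmetic: when an unreliable detector flags a student, the probability that the student actually used AI can be far lower than the detector's headline accuracy suggests. The rates in this sketch (prevalence, sensitivity, false-positive rate) are hypothetical assumptions chosen for illustration, not figures from the Perkins preprint.

```python
# Bayes' rule applied to AI-detection flags: even a detector that catches
# most AI use can flag mostly innocent students if its false-positive rate
# is non-trivial. All rates below are hypothetical, for illustration only.

def prob_used_ai_given_flag(prevalence: float, sensitivity: float,
                            false_positive_rate: float) -> float:
    """P(student actually used AI | detector flagged the submission)."""
    true_flags = prevalence * sensitivity            # genuine AI use, caught
    false_flags = (1 - prevalence) * false_positive_rate  # honest work, flagged
    return true_flags / (true_flags + false_flags)

# Suppose 10% of submissions used AI, the detector catches 60% of them,
# and it wrongly flags 20% of honest work.
p = prob_used_ai_given_flag(prevalence=0.10, sensitivity=0.60,
                            false_positive_rate=0.20)
print(f"P(actually used AI | flagged) = {p:.0%}")
```

Under these assumed rates, only one in four flagged students actually used AI; the other three are falsely accused, which is why detector reliability matters so much before flags are treated as evidence of misconduct.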

Inappropriate use of AI is also observed in research. Fake research papers are on the rise, exacerbating misinformation and diminishing public trust in science. In an environment where ‘publish or perish’ is a frequent refrain, it is predictable that some turn to generative AI to fabricate data or write entire papers. This undermines real scientific research and may accelerate distrust in science and scientific institutions. Youth cannot afford a rift in the relationship between science and society, especially in the face of worldwide problems such as climate change and the resurgence of vaccine-preventable illnesses.

Given that AI is here to stay, we encourage ongoing, society-wide reflections to ensure that our shared values and priorities are recognized and that policies remain flexible and adaptable in this rapidly evolving landscape.

LLM technologies are trained on imperfect datasets. This introduces the possibility of biases, inaccuracies, and incomplete assessments; it therefore remains critical that trained human beings monitor the implementation of AI.

During the COVID-19 pandemic, while there were hopes that AI tools could improve patient outcomes, none made a marked difference after use in a hospital setting (Heaven, 2021). In one case, an AI tool learned to recognize the font used by hospitals with higher caseloads, and that font in turn became a predictor of risk. This is an easily fixable error, but it is one that must first be caught and identified. In another case, an algorithm for detecting tuberculosis boosted the score of X-rays taken on portable machines (Harris, 2019). With the increasing implementation of AI tools in healthcare, oversight and transparency will be exceedingly important.

In medical AI models, the lack of diverse and representative patient data risks producing inaccurate results for patients from marginalized groups, such as women and people of colour, who already face health inequities (Mayo Clinic Press Editors, 2024; Hamzelou, 2023). This proved to be the case with an FDA-approved AI algorithm for mammogram analysis, which was more likely to generate false positives for Black and older patients (Nguyen & Ren, 2024). The study did not assess overall accuracy or false negatives, raising the concerning possibility that an FDA-approved algorithm not only has bias built into it, but that this bias could cause it to miss a cancer diagnosis. Even if accuracy is unchanged, an increased rate of false positives still harms patients through unnecessary tests, additional medical procedures, and added pressure on the healthcare system. Cumulatively, this has the potential to worsen existing healthcare disparities.

While some success has been observed in reducing menial day-to-day tasks, medical centres have reported AI transcription tools inventing sentences in over half of use cases (Burke & Schellmann, 2024). This further highlights the necessity of human oversight, and may actually increase physicians’ workload if they must review every transcript.

The point is not that AI will never achieve these tasks. AI is improving rapidly, and it is easy to envision a future in which AI algorithms successfully improve the diagnostic process for many diseases. However, with such incredibly rapid advancement, it will be critical to maintain caution and human oversight, and to consider carefully what data is used to train these systems over long periods of time. It is not helpful to overstate the possibilities without acknowledging the potential harms and the continued need for expert oversight.

Confidentiality, Privacy, and Intellectual Property

Generative AI tools like ChatGPT raise questions about ownership and attribution of creative outputs. The lack of clear guidelines around intellectual property affects industries like media, design, and software development, where youth are often early adopters and innovators. However, discussions about issues concerning art and literature rarely arise, a significant gap given the importance of art and literature to Canadian culture and their roles in the economies of Quebec and British Columbia (Episode 26: Who Owns AI-Generated Creations (and Why You Should Care), 2023). Under current law, AI-generated art is not protected by copyright. However, one could argue that artist-level creativity is required to generate good AI art. A grey area remains: one could imagine a world in which AI is treated as a resource or art supply like any other, no different from the paint an artist buys. Yet the ability of generative AI to complete portraits and entire paintings in moments makes it far more powerful than any art supply available in the past. It is therefore our recommendation that AI be treated as an artistic collaborator, and that some credit be given to the people who created the specific AI software used.

Canadian AI researchers and companies, while world-class, often face challenges related to protecting their innovations. For instance, companies developing AI-driven products or software risk having their proprietary algorithms or data sets misappropriated by larger international firms. Addressing these challenges requires modernizing Canada’s IP frameworks to account for AI-generated content and ensuring that Canadian creators and businesses are protected in a rapidly evolving global market. Canada’s leadership in this area could also extend to working with international partners, including the U.S., to harmonize intellectual property laws and promote a balanced approach that fosters innovation while safeguarding creators’ rights. Considering the impact of the creative arts on Canadian culture, it is imperative that we create unified intellectual property law that addresses the challenges listed above and takes a proactive approach to intellectual property and AI.

The exponential growth of data required for training AI systems raises significant concerns about user privacy, particularly for younger generations, who are prolific contributors to digital platforms. Data collection practices often operate in a loosely regulated space, with individuals unknowingly consenting to data harvesting methods that compromise personal information. Transparent and comprehensive data governance policies are paramount in ensuring the protection of user privacy, particularly in light of the increasing frequency of privacy breaches and unethical data usage. The data practices of companies like Google, Meta, and TikTok have drawn significant scrutiny, with TikTok, in particular, facing allegations in 2023 of improperly sharing sensitive user data with Chinese authorities (Fung, 2023). These practices exacerbate concerns regarding consent and ownership, especially for youth, who are often unaware of how their personal data is being exploited. Advocacy groups such as OpenMedia have raised alarm over these issues, calling for the modernization of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) to address the emerging challenges posed by AI-driven data collection and use (OpenMedia, 2021).

Concerns have been raised about the potential for AI systems to perpetuate and even amplify societal inequalities through social profiling and discrimination. This is especially evident in systems used for surveillance, targeted advertising, and hiring practices, which can be influenced by biased algorithms that disproportionately impact marginalized groups. The rise of AI in these sectors has led to ethical dilemmas surrounding consent, autonomy, and the reinforcement of systemic inequities. In the wake of political shifts, such as the second-term election of President Trump and the subsequent executive orders that sought to limit Diversity, Equity, and Inclusion (DEI) initiatives and dismantle affirmative action (The White House, 2025), the risk of AI being used to entrench discrimination has grown. The implementation of AI in such sensitive domains—like hiring and law enforcement—raises the risk of increasing racial and social divides. Studies have shown that AI systems can produce biased and discriminatory outcomes, including covertly racist decisions influenced by factors such as dialect and ethnicity (Hofmann, 2024). This underscores the importance of ensuring that AI technologies are developed and deployed with ethical considerations at the forefront, as they can perpetuate harmful societal biases when left unchecked.

Furthermore, AI-driven surveillance technologies, such as facial recognition systems used by companies like Clearview AI, have sparked global backlash due to their invasive nature and potential for privacy violations. In Canada, Clearview AI faced legal challenges after it was discovered that the company had scraped billions of publicly available images from social media platforms without consent (Office of the Privacy Commissioner, 2021). These practices not only violated privacy laws, but also underscored the necessity of stringent oversight and regulatory measures to safeguard individuals’ privacy rights. The youth demographic, as the most digitally active, is especially vulnerable to these invasive practices. Advocacy groups such as Privacy International in the UK are calling for stronger regulations to address the use of AI in public spaces and predictive policing, aiming to strike a delicate balance between fostering innovation and safeguarding individual privacy (Privacy International, n.d.).


Conclusion

As we consider the role of AI in society, it is crucial to approach it with a balanced perspective. New technologies often come with inflated expectations and fears, and AI is no exception. While its potential for positive impact is significant, it is equally important to recognize its risks and challenges. History has shown that technological advancements, such as smartphones and personal computers, were initially met with fears of generational destruction. Yet while they undoubtedly brought negative consequences, their overall impact has been less catastrophic than anticipated. In a similar vein, AI, when integrated thoughtfully, has the power to enhance human skills and creativity, rather than replace them. Many universities are now incorporating AI into their curricula, and members of the Youth Council have embraced it as a tool for research and innovation, ensuring it is used responsibly with proper citation. This highlights the potential for AI to augment, not replace, human ingenuity, fostering a future where technology and humanity coexist and thrive together.

In tackling the challenges AI presents, we must not shy away from its complexities. By fostering responsible development and usage, we can create an AI-driven world that empowers individuals, supports ethical research, and contributes positively to global society. The journey to this future begins with informed, engaged, and proactive youth who can lead the way in shaping an equitable, sustainable, and secure AI landscape.


References
