Let’s take a closer look at how artificial intelligence works.

It’s not often that a new technology captures the attention of the world the way artificial intelligence (AI) has this past year. ChatGPT in particular has drawn both passionate praise and ire from all corners of society, often rooted in deep conviction.

But what are these platforms? How do they work? Are they safe? Should we use them? These are some of the pressing questions to consider as we encounter a changing technological landscape.

One question weighs particularly heavily on many homeschool parents, especially given the growing complexity of the conversation: How will AI affect my child’s education?

The technology is still evolving, and we don’t have all of the answers. However, we want to take a look at some of the basics of AI in hopes of helping you consider these questions.

What is AI?

AI is an umbrella term covering any computer program that performs complex tasks that would normally require human intelligence. Google Search uses a form of AI to filter results and present those most likely to answer the user’s query. Other examples include programs that interpret speech, play games, or identify patterns.

ChatGPT—the focus of recent conversation and of this article—falls more specifically under the category of generative AI, along with Google’s Bard and image-creation platforms like Midjourney or DALL-E. These are among the first platforms that allow people to knowingly and directly interact with AI.

How do they work?

AI generators such as ChatGPT (and indeed most AI models) are trained largely through a process called machine learning. In this process, the program is fed large amounts of data, analyzes it, and then uses what it finds to get better at a particular task. The program uses existing examples and data to predict likely future outcomes.

In the case of ChatGPT, the program has been fed vast swaths of text in order to analyze how people write. ChatGPT then uses that information to mathematically predict the most likely next word in each sentence. Most modern cell phones have a similar predictive text feature, albeit less robust and much smaller in scale.
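To make the idea concrete, here is a small, simplified sketch of how a program might "learn" to predict the next word by counting which words tend to follow which in its training text. This is not ChatGPT's actual code, which is vastly more complex, and the tiny training text here is only an illustrative stand-in for the vast amounts of text a real model is trained on.

```python
from collections import Counter, defaultdict

# A tiny "training set" standing in for the vast amounts of text a real model sees.
training_text = (
    "homeschooling allows a personalized curriculum "
    "homeschooling allows flexibility "
    "homeschooling requires commitment and planning"
)

# "Training": count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("homeschooling"))  # prints "allows" (seen twice, vs. "requires" once)
```

A real model does something far more sophisticated with billions of examples, but the basic intuition is the same: it learns from what it has already seen in order to guess what is most likely to come next.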

For example, when asked to explain homeschooling in three sentences, ChatGPT wrote this:

“Homeschooling is an educational approach where parents or guardians take on the responsibility of teaching their children at home, typically outside of traditional school settings. It allows for a personalized curriculum tailored to a child’s learning style and pace. Homeschooling can offer flexibility but requires commitment and planning from the educators.”

If this prompt were given again, the answer would contain similar information but vary slightly in its construction.

How does it know what to write?

Because a generative AI program is, at its core, an enormous mathematical model, it treats a prompt like an equation it is trying to solve. ChatGPT gathers information by “remembering” common words and phrases that are often connected with keywords from the prompt, then arranges those words and phrases in the order most likely to “solve” the equation.
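In very rough terms, and greatly simplified from how ChatGPT actually works, the program ends up with a list of candidate next words and a likelihood for each, then picks from among the most likely ones with a small amount of randomness mixed in. That randomness is part of why the same prompt produces slightly different wording each time. The likelihood numbers below are purely illustrative assumptions, not taken from any real model.

```python
import random

# Hypothetical likelihoods for the next word after "Homeschooling is an educational ..."
# (illustrative numbers only, not taken from any real model)
candidate_words = {
    "approach": 0.55,
    "option":   0.25,
    "method":   0.15,
    "choice":   0.05,
}

def choose_next_word(candidates):
    """Pick a next word, favoring the most likely but allowing some variation."""
    words = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running this several times usually gives "approach," but not always,
# which is why repeating a prompt yields slightly different answers.
for _ in range(3):
    print(choose_next_word(candidate_words))
```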

Critical Response

The response to generative AI has been sharply divided, particularly when it comes to education. Due to the relative infancy of the technology, many concerns and unanswered questions remain.

Proponents see it as a way to automate mundane tasks, aid in brainstorming or idea generation, and summarize information. Some cite it as a useful tool for writers, others as a way for non-writers to communicate.

Critics of the technology call it a solution in search of a problem and raise concerns about plagiarism, accuracy, ethics, and privacy. Several major lawsuits have been filed against AI companies, including one recently filed by The New York Times, claiming copyright infringement.

This conversation is particularly relevant to education. Some educators, parents, and researchers see the use of generative AI in education as a necessity, lest children be left in the lurch as technology advances around them. Others are concerned that reliance on generative AI will hinder children’s ability to think critically. Still others would prefer to teach their children to use the technology responsibly rather than have them find it on their own.

Home educators in particular would do well to consider these matters, as they direct their child’s education. Because of the freedom and flexibility homeschooling offers, parents can pivot quickly in response to changing technology, unlike directors of traditional school systems. What is more, they can do so in a way that is suited to the particular needs of their child.

We cannot fully address the subject of AI in this brief article, but we hope this information is useful as you make decisions about technology in your homeschool. To that end, we hosted a pair of webinars on the topic that can be found here and here.

Additionally, we recently surveyed our followers on Facebook regarding AI, and received a wide variety of responses, which will be discussed in an upcoming article.

We will continue to monitor developments in the technology as it relates to homeschooling.

Michael Tobin is a husband, father, writer and homeschool graduate originally from Memphis, Tennessee. He writes and edits content for HSLDA with a focus on research. He is currently working on a master’s degree in Playwriting.