Parents prepare their children for possible stranger danger in the park. They warn their teens about the dangers they may experience behind the wheel. We give them devices, but have we prepared them for dangers we can't even anticipate? One thing we can't predict is the impact of artificial intelligence apps on our children. Elon Musk warns that the use of artificial intelligence poses challenges we can't predict. He recommended a slow rollout, but others did not agree. The result is unforeseen challenges, with protective measures put in place by developers only after a problem is discovered. So how can we protect our children from the unpredictable challenges that lie behind their screens?
The answer to protecting against what we can't predict lies in teaching users to question everything the internet generates. We need to teach them to be somewhat skeptical. Along with learning how to question, the place to start preparing our users is understanding how AI is created and where it is flawed. They also need to know how game creators employ neuroscientists to guarantee a game sustains users' attention, and how to manage their own use.
AI is only as accurate as the information it was given. It collects whatever information it finds on the internet, and it doesn't know whether that information is accurate. Essays created by AI apps need to be researched to determine whether they are biased or incorrect.
In the 1980s, parents weren't prepared for the side effects of video games. When Nintendo was introduced, game creators wanted to grab attention and retain it. No one was prepared for the addictiveness of games that would lead to children flunking out of their freshman year of college. Parents didn't know they needed to train their children to control the game instead of letting the game control them. Technology has advanced exponentially since the days when it was plugged in at the wall, so we will need to be vigilant about future advances that can negatively impact our children.
A new threat is how easily vulnerable children are manipulated by AI-generated friends, without ever questioning them. We need to have discussions with our children about what is going on in their games. I used to watch television with my children and discuss what they were watching. I suggest every parent play along with the child to see what the avatars are doing in the game, then discuss it with them. The first thing I saw in an educational game one of my students was playing: an avatar befriended the player and immediately showed her where she could buy clothes that didn't make her look like a noob. The avatar informed her that the free clothes weren't cool.
AI can create fake news so effectively that users can't tell what is real and what is fabricated. The term for incorrect AI-generated answers is "hallucinations." Students aren't trained to identify hallucinations, so we need to teach them to question what they are reading and seeing on the internet. It's important to model how to question what is seen in the news, because it is difficult to tell whether AI-generated images are real. My online student and I were researching fish found in her lake. As she was viewing hundreds of images, she questioned one as being fake. Since she questioned the image, we researched more fish like the one in it. She was correct: the fake image looked real on its own, but when compared with others it was clearly fabricated.
AI apps are attractive to students because they are easy to use and produce fast results. It's our job as parents to discuss the negatives surrounding the use of these apps. Students should question the accuracy of the reports, essays, homework answers, speeches, and admissions essays generated by AI apps. Our students trust everything the apps generate and aren't aware that it could be incorrect. We need to encourage all users to critically view and evaluate everything AI produces. A discussion about the ethics of turning in AI-generated work also needs to occur: not only is it cheating, but the generated information can very likely be incorrect.
Addressing the ethical use of AI is important. Allowing AI to do their work is not only cheating to get a good grade; it cheats students out of using their brains to think critically and create. These are muscles that need exercise, or we lose the ability to use them.
We might want to take the road of those working in Cupertino in 1998. I was hired to present a seminar for parents entitled Homework Solutions for Weary Parents. I shared strategies that shifted the role of the parent to that of a strategy coach, so their children could become self-advocates and end nightly homework battles. I touched briefly on how important it is for students to learn to manage their own devices rather than having parents manage them. After my session, one parent pulled me aside and said, "You gave me lots to think about and change, but you don't know your demographic. Most of us are involved in the development of the devices you spoke about, so none of us have them in our homes." In an interview with Diane Sawyer years ago, Bill Gates said that he waited to give his children phones until they were in high school, and wished he had waited longer.
We could do the same thing, but keeping devices out of the house entirely is not in the best interest of our children. Instead, they need to be prepared to use their devices mindfully, with warnings about the dangers that lurk behind their screens.