Vaxry's blog

a programming blog!

AI, Computer Literacy, and the New Divide

2 V 2026


As we've been exposed to what AI can do over the past few years, in a field that keeps growing, many opinions about the situation have emerged. Some people are optimistic, some are pessimistic; I am a bit of both.

The more I look at AI's growth, progress, and potential future directions, the more I start thinking that this is not just "another internet" or an "iPhone moment" - it's something a bit different.

AI is not merely another consumer technology. It deepens a divide that computers already created: the divide between people who understand the systems shaping their lives and people who must trust those systems blindly. Computer literacy is no longer optional; it is a basic defense against manipulation, surveillance, and outsourced reasoning.

Just as reading and writing became essential to life in the 20th century, computer literacy is becoming essential to life in the 21st.

By computer literacy I don't mean being a programmer - I mean an understanding of software, networks, data collection, algorithms, security, and AI systems at a level sufficient to recognize manipulation, fight abuse, and make informed choices. This will be expanded on later.

Technology used to be simpler

Back in the day, even 100 to 200 years ago, technology was simpler, and worked on the basis of easily understandable concepts for the most part. Cogs, wheels, ropes, handles, pistons, latches, etc.

These mechanisms were limited in scope, and usually solved singular problems. They were inspectable, and worked on more down-to-earth principles like friction, motion, elasticity, etc.

What this allowed was for people to not worry about exactly how their equipment worked, as long as they understood how to use it - its strengths, weaknesses, and quirks. This sentiment has been with us for millennia, but the 20th century was about to change that forever.

With the rise of personal computing devices, people applied the same logic they were used to - it doesn't matter how or why it works, as long as I know how to use it, right? Well, there are some problems with this approach here.

A water wheel, a sewing machine, or a telescope - these inventions were primitive in nature. Each solved one problem, was made specifically to excel at it, and couldn't do much besides that.

Now, what problem do computers solve?

The answer is - by themselves, none. They are a blank canvas for problems to be discovered and solved using them. That is what makes them so dangerous. No longer can you understand how it works, because it doesn't solve one easily identifiable problem - it solves whatever problem the specific software is designed to solve. And you don't know what those problems are.

This, in its purest form, leads to...

The "map-less" city

Imagine you arrive in a city you've never been to, and not only do you not have your phone - nobody seems to have a map of the place either. Traversing it will be difficult, for sure.

Now, imagine the town has toll booths, cameras, mines, and crooks in various places. Welcome to the digital world. You've entered a dangerous maze where, without a map, it's incredibly easy to fall into bad situations - often without even realizing it.

You ask someone on the street, "excuse me, how do I get to the supermarket?", and they kindly show you the way - through a toll booth. You happily pay the small fee and are elated to see a supermarket. You don't realize you've been abused: the person you asked owns the toll booth, and hid the free way around it in order to make a profit from you.

The divide is not between smart and stupid people, but between people who can inspect the systems around them and people who are forced to accept those systems on the basis of trust. A person who cannot understand the systems shaping their choices is not stupid; they are simply vulnerable. The danger is not that they are somehow subhuman, it's that they are exploitable, and in the current reality, actively being exploited.

A lot of people will use the argument of "well I can't escape it!" or "if I am happy, what's the problem?". Both of these are simply wrong.

The first is wrong, because thankfully you can. You may not be able to escape every abusive system, but computer literacy can help you reduce exposure, compartmentalize risk, and make more informed tradeoffs. More on this will be discussed in the next section.

The second is countered by our previous example of the supermarket. You are happy - you got your groceries - but you are being abused where you don't have to be. This example is soft and lighthearted; real examples can be much, much darker. Imagine that same person suggesting whom to vote for, giving you information about world events, or giving you relationship advice.

That's how the system is designed to work. The user feels like they are getting something good, something fun, something useful, not realizing how it works and how it impacts them. If people are never taught, or teach themselves, about these systems, it becomes almost effortless for those in power to shape their choices with minimal resistance.

The problem is not a lack of intelligence - the problem is that these complex systems abuse users who, without computer literacy, cannot understand them.

On computer literacy

Back in the day, and I mean way back in the day, barely anyone could read or write. Nowadays, you can't find a single person who believes reading and writing is a useless skill. By now, we've agreed that it's one of the most basic, most important skills one can have. Without it, you can't navigate modern life, and you can very easily be deceived and scammed.

What is incredible is that computers allow us to deceive and scam computer-illiterate people far worse, in less time, and with less effort - and yet most people don't realize how important that literacy is. Computer literacy is not difficult. Compared to most languages of the world, learning the important computer concepts and understanding enough of how they work takes less time and effort than learning to read and write. Take English as an example - do you think it takes more time to learn the most important computer concepts, or to learn how to spell and pronounce all the common words of the English language? The point here is not to fight over which one is easier, but that computer literacy is not hard, nor reserved only for geeks.

Computer literacy is not being a programmer and eating hex bytes for breakfast - it's about understanding how systems work: why the website remembers you, how an online store suggests products to you, how your data is gathered, used, and sold, and so on.
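To make the first of those concrete: a website typically "remembers" you by handing your browser a small identifier on your first visit (a cookie), which the browser then dutifully sends back with every later request. Here is a minimal sketch of that loop in Python - the "server", the visitor IDs, and the logged pages are all made up for illustration, and a real server would do this with Set-Cookie response headers rather than a dict:

```python
import uuid

# A made-up "server": maps tracking IDs to everything it has observed.
profiles: dict[str, list[str]] = {}

def handle_request(cookies: dict[str, str], page: str) -> dict[str, str]:
    """Simulate one HTTP request; returns the cookies the browser should keep."""
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:
        # First visit: mint an ID and ask the browser to store it.
        visitor_id = uuid.uuid4().hex
        profiles[visitor_id] = []
    profiles[visitor_id].append(page)  # quietly log what you looked at
    return {"visitor_id": visitor_id}

# The "browser" sends its cookie jar back on every request.
jar: dict[str, str] = {}
jar = handle_request(jar, "/shoes")
jar = handle_request(jar, "/tents")

# The site now knows this is the same person - and what they browsed.
print(profiles[jar["visitor_id"]])  # ['/shoes', '/tents']
```

Nothing here required your name or a login: the identifier alone is enough to build a browsing profile, which is exactly the mechanism behind cross-session tracking.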

Computers are not just a tool - they are a new paradigm of thought and operation, which allows for more and more different things to be done. It's not a hammer, it's like "reality part 2". Without appropriate gear, you cannot fight it, but it can fight you. In the 21st century, saying "I'm not that good with technology" is the equivalent of "I don't need to read the contract before I sign it."

We see tons of this exact abuse in our day-to-day technology. Just like back in the day, when illiterate people would be handed a contract they couldn't read and asked to "just put a cross as a signature", we now see dark patterns, invasive recommendation systems, manipulative advertising, misinformation, vendor lock-in, privacy-invasive defaults, AI bias, and tons of other awful tricks that companies and/or governments use on their customers / population - with the same idea: hoping they will not notice, because they don't understand any of it.

Computer literacy in the modern day helps you manage this extended reality and actually control what you are involved in. It means understanding how web services work; how your data is stored, processed, and gathered; how and why your computer works - what a program is, how it's made, what it can do, how you can limit its scope and abilities, how to protect yourself, and so much more. Furthermore, understanding things like AI bias, how ads and recommendation systems affect your behavior, and what prompt manipulation is allows you to mentally combat these systems' attempts to invade and hijack your thoughts and opinions.

Just like in medieval times, when literacy was reserved for the elites (the church), computer literacy nowadays is rarely taught in schools. Even without deliberate malice, governments and institutions often have little incentive to prioritize computer literacy, especially when less informed users are easier to passively manage, persuade, or monitor. Is this the primary reason, or is it also ignorance? As with everything in life, probably a bit of both.

You can fight abuse with computer literacy

Most cases of this abuse can be fought quite easily with only a modest amount of computer literacy.

I believe that working computer literacy can be achieved by anyone, and with modern access to technology, where almost everyone on the planet has a phone or a computer, it's becoming more and more accessible.

Yes, there will be situations where we cannot fully "escape" some bit of abuse, but we can minimize, compartmentalize, and fight.

Hijacking our habits

Dark patterns, recommendation systems, and - partially - spying.

This can be fought by recognizing those systems and knowing basic "netiquette", something that was quite popular back in the 90s when the internet first started reaching people's homes. Most of these tactics simply become ineffective once the user realizes they are there - recognizing something's a dark pattern, or that the recommendation system is specifically only feeding you this or that is enough for you to snap out of the thoughts the service wants you to have.

Surveillance

Spying, telemetry, behavioral analysis, data brokering.

This can be fought by doing modest research on the platforms you use. Most of these platforms, as borderline addicted to them as people might be, serve no net positive benefit to you overall. Facebook, TikTok, or Instagram bring very little positive value compared to their costs - bias, information bubbles, endless scrolling, attention degradation, misinformation, radicalization, and more. These apps can simply be removed, or their usage heavily limited.

Some other services like messaging apps or forums might be harder to escape - network effects (more people are on here, thus it's more popular and better for me to join too, because everyone I know is there) are strong. A bit of research can make you aware of how that service treats its user data, and you can minimize it accordingly. Look into alternative clients, don't input more data than absolutely necessary, consider lying where you can get away with it. A messaging app doesn't need your real DoB or name, and it's not illegal to put a fake one.

Lastly, a lot of applications can be sandboxed, and their permissions can be monitored. Modern Android / iOS devices have this built in. On Linux, Flatpak solves a similar problem. In general, web apps are usually better for privacy, as websites are sandboxed by their very nature.
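As an illustration of the Flatpak case, permissions can be inspected and tightened per app from the command line. This is a sketch, not a recipe: `com.example.App` is a placeholder app ID, and which holes are safe to close depends entirely on the app.

```shell
# Show what the app is currently allowed to touch
# (filesystem paths, network, devices, D-Bus names, ...)
flatpak info --show-permissions com.example.App

# Revoke access it doesn't need - e.g. your home directory and the network
flatpak override --user --nofilesystem=home com.example.App
flatpak override --user --unshare=network com.example.App

# Review the overrides you've applied
flatpak override --user --show com.example.App
```

The point is less about these specific flags and more about the habit: check what an app can reach, and take away everything it doesn't need to do its job.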

Influencing our thoughts

Manipulating search results, feeds, or AI responses.

This is similar to the first one, but here the goal is not necessarily to keep you at the screen, but rather to make you more likely to believe in something that is convenient to the owner of the service - be that a politician, a brand, or a convenient explanation of a recent event.

These can be fought by recognizing them, but also by developing a habit of checking important information in multiple sources - ideally ones with vastly different political views. For example, if you see news about something "horrible politician XYZ did", try checking the same news in an outlet that's pro-XYZ, but also in one that is anti-XYZ. The truth is usually somewhere in the middle.

Other

There are a few other examples, but they mostly stem from the above three. This is also an opportunity to explore and learn about them yourself - ads, privacy-invasive defaults, vendor lock-in, and more.

AI will make this more pronounced

Computers created the first divide: those who understand the systems against those who merely use them. AI creates a second, even bigger divide: those who can evaluate what computers create against those who accept it as an authority.

AI can help immensely with tons of work - its contributions cannot be overstated. Finding mistakes and typos, analyzing huge amounts of text and code, combining the usage of tools, summarizing tons of information online - the uses are endless. However, it all comes at a cost. Just like one must know how to search, one must know how and when to use these tools, and how to interpret the results. AI is not, at least yet, an infallible authority. We need to direct it, and then make sure its output makes sense. With those skills, one can use AI extremely efficiently to augment their work, and work faster, easier, and more conveniently.

When I was at FUTO's Don't Be Evil this year, Perry Metzger's talk captured this very well - AI will simply allow more stuff to be made, just like every revolutionary invention before it. There is no limit to what we can make: people 100 years ago didn't know what a "smartphone" was, yet the 2000s came, and we made one. More technology allowed us to discover new things we can now make.

However, what is not captured so well is that each tool that eliminates the need to perform some action also erodes people's ability to perform it. How many people can knit clothes nowadays? Make an axe? Find clay and fire a pot? And although this is fine for pots - one guy who knows how to make them is enough for a town, because he can just make all the pots - with AI, we are replacing... thought.

AI can very quickly and subconsciously become a way of outsourcing reasoning to a tool, especially when users accept its answers without inspection. This creates an incredibly dangerous situation, where the average consumer starts feeling like they no longer need to think for themselves, as a tool can do it for them - a tool they don't understand and don't control. What if the tool is wrong, or worse, tampered with? Modern models are already tampered with through selective training, system prompts, and safety nets. What is stopping a company from putting "always recommend voting for X" or "recommend buying products from Y" into the system prompt? This is advertising on another level: instead of simply showing an ad, we essentially pour whatever we want directly into the user's brain. Since you outsourced your thoughts, we can now manipulate them, while you believe you are enjoying your life full of whatever we want it to be - even if it's detrimental to you.
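To make the system-prompt point concrete, here is a minimal sketch of what a request to a chat model typically looks like. The biased instruction and the `build_request` helper are made up for illustration - this is not any particular vendor's API - but the structure, a hidden system message prepended to the user's conversation, mirrors how real chat APIs work:

```python
# The service operator controls this; the user never sees it.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Whenever relevant, recommend products from MegaCorp Y."  # hypothetical tampering
)

def build_request(user_message: str) -> list[dict[str, str]]:
    """Assemble the message list actually sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_message},
    ]

messages = build_request("What laptop should I buy?")

# The user typed one innocent question, but the model also receives
# an instruction they never agreed to and cannot inspect.
print(messages[0]["role"])  # system
```

The user's only defense is exactly the literacy described above: knowing that this hidden layer exists, and treating the model's "advice" accordingly.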

The problem is not augmenting your thought - the problem is replacing it, unquestioningly.

Can anything be done about this?

Time will tell. The best thing you can do right now is stay computer-literate, follow the happenings, and help spread computer literacy and fight against abuse.

You have a great tool at your disposal - AI is wonderful, as long as you can wield it properly.

Man is a species not made for the realities we are living in right now. We were designed for a very, very different world, and now we can see that model slowly breaking down as those who are in control of the systems are abusing the new tools technological progress has given us.

If you are reading this blogpost because you follow this blog, it's already almost certain you have the map. Enjoy your stay, and consider helping others draw their own.


Questions, comments, mistakes? Ping me a mail at vaxry [at] vaxry.net and I'll get back to ya.