When Computers Are Looking at Your Data

Dejan Georgiev
3 min read · Oct 24, 2020
Photo by Markus Spiske on Unsplash

Everywhere you go, you generate a cloud of data; you are trailing data with everything that you do. You are producing data. And THEN there are computers looking at that data, learning from it, essentially trying to serve you better. They try to personalize things for you; they try to adapt the world to you. So, on the one hand, this is great, because you get a personalized experience. BUT there is also a danger, because the entities and companies in control of those algorithms don't necessarily have the same goals as you. And this is what I think people need to be aware of.

We thought that we were searching Google; we had no idea that Google was searching us.

You know, we came into this digital world thinking that we were the users of social media, but actually social media was using us. We thought that we were searching Google; we had no idea that Google was searching us.

Photo by Charles Deluvio on Unsplash

So famously, industrial capitalism claimed nature: rivers, mountains, forests and so forth were claimed for the market dynamic, to be reborn as real estate, as land that could be sold and purchased. Industrial capitalism claimed work for the market dynamic, to be reborn as labor that could be sold and purchased. NOW here comes surveillance capitalism, following this pattern, but with a dark, startling twist. What surveillance capitalism claims is private human experience. Private human experience is claimed as a free source of raw material, fabricated into predictions of human behaviour. And it turns out that there are a lot of businesses that really want to know what we will do NOW, SOON and LATER.

And it is how these companies make their money, and how their algorithms reach deeper and deeper into our daily lives and our democracy, that makes many people increasingly uncomfortable.

We have to recognize that we gave technology a place in our lives that it had not earned.

Essentially, because technology made things better in the '60s, '70s, '80s and '90s, people developed a sense of inevitability that it would always make things better. We developed a trust that it had not earned.

In the Age of AI — Documentary

Should we trust AI, or fear it?

Trust is the foundation of any successful relationship, and it is of vital importance when it comes to AI. We have over 17,000 written pages of laws and orders only because trust between people has been violated so many times over the years. That's why nowadays companies sign long contracts where a handshake should be enough. We know that AI can be used to loop through data and detect a diagnosis in seconds, which is very good, but it can also be used to threaten people and violate privacy, as it already has at times, which is very bad. This is precisely what a lot of people are afraid of, so AI demands that some kind of ethics be involved. The BIG question here is: where do the AI developers or the AI inventors get their ethics from? The answer could give us a glimpse of how to view AI.

I highly recommend reading the new book by John Lennox, “2084: Artificial Intelligence and the Future of Humanity”, or listening to his talk at RZIM 👇

John Lennox speaking at RZIM
