In light of the anti-AI campaign brewing around Neocities, a fellow neocitizen asked me what my thoughts were regarding the topic. Some people like it, some people hate it, and some of us are sitting on the fence, but her question reminded me of an initiative I ran across that finds the middle ground between man and machine. The site is called Brain Made, and it came as a pleasant surprise. What we learn is that how you’re using AI becomes more important than the fact that you’re using it.

This author makes a compelling case for using nonhuman cognition: creating a rhyming dictionary seems okay, versus letting it write your poems, which probably isn’t. Using it to populate visual references of birds seems okay, versus letting it paint large sections of your artwork, which probably isn’t. Getting ten happy-sounding synonyms for despair seems okay, versus asking it to write your anti-transcendentalist masterpiece, which probably isn’t. Hopefully we’re now able to make some distinctions.

There’s a big difference between using AI as a teaching aid and trying to replace the human enterprise. It’s meant to help us keep the lines of fruitful inquiry open, which is why sentient creatures brainstorm with each other in the classroom, for example. We don’t call this unethical; we simply consider it part of classroom requirements. Brainstorming with robots may be gaining slow traction, but it presents a portrayal that makes automation seem less intimidating and more like a familiar companion.

The impact of digital consciousness has broad, sweeping implications: ethical, legal, and moral. Many challenges probably won’t even make themselves known until we’re further along, but the ones we have now don’t go away because we’re currently in session. If anything, they just become more immediate. A few questions come to mind: who owns AI, who’s liable for the advice it gives, and how will it shape the trajectory of learning?

On one hand, we’re becoming less thoughtful because we can just google information. There’s no automatic brain juice or methodology required to extract, ascertain, or obtain the right answer. We can qualify that by saying it depends on what we’re talking about. Depends on the person. Depends on the frequency of use, right? On the other hand, we’re able to reduce time spent on tedious tasks like putting together MLA, APA, and Chicago style citations, which doesn’t necessarily threaten academic integrity.

How AI shapes our world depends on the people it touches, so there’s a lot to be said for those of us doing the handling. This especially includes the tech companies, researchers, and founders who set operational standards: expectations for how code-driven intelligence will be launched and regulated worldwide. It has been said that even a tool becomes a weapon if you hold it the right way, but we all play a part in AI’s deployment, and that part extends beyond businesses and engineers.

We have a collective responsibility to shape these mini versions of ourselves that we’ve coaxed into existence. Meredith Broussard’s book “Artificial Unintelligence” argues that technology isn’t nonpartisan; it chooses the side of its creators. Machine agency encodes humanness, reflecting our values and inheriting our limits (Broussard). As a result, we owe this infrastructure conscientious stewardship.

We’ve learned that there are more proactive ways of using neural networks that won’t necessarily damage either of the entities involved; ways that won’t impede critical thinking or disrupt AI health protocols. In an ideal world, thinking machines wouldn’t substitute for bona fide ingenuity or overcompensate for ineptitude. Artificial minds wouldn’t surpass or fall behind knowledge acquisition but would work, develop, and produce in parallel with it. This just isn’t our current reality.

Engineered intelligence is a fairly sensitive topic, not only for the academic community but also for working-class communities who find it hard to be objective about an economy eroded by the jobs it claims. Research from Goldman Sachs estimates that about 7% of US jobs could be displaced by AI over time (Goldman Sachs). Any loss of employment can be devastating, making it harder to afford basic needs like housing, food, healthcare, and education.

Machines are now able to do jobs once done by working humans, who are becoming less central in the contemporary world of automation. This shift humanizes AI, taking the conversation from novelty to collateral damage. To take this a step further: algorithmic systems have become a mechanism of governance that perpetuates repression by supporting the power structure, making advancement less about economics and more about politics, identity, and control.

Loss of social cohesion is not an opinion; it’s a fact that many of us are wrestling with, but there’s an author who sheds some light on how this design is enabled. Ruha Benjamin’s book “Race After Technology” argues that exclusionary systems don’t always pull up a chair to our dinner table; they’re often disguised as bringing about efficiency, and this is what makes them run (Benjamin). They’re able to plant themselves in our metaphorical gardens of chaos and order, very matter-of-factly.

It’s this type of obscurity that makes the entire process feel rather sinister. Consequently, it’s difficult for a person to fully reckon with technology because of its complexities. What we can come to terms with is the fact that things are changing rapidly, without our permission. Despite the promise of AI, we receive unwanted side effects such as less intellectual engagement and the loss of jobs. It becomes apparent that it’s difficult to have progress without some kind of fallout or aftershock.

Regardless of how we feel about it, significant developments are underway, from AI-driven policy, which collects large data sets of social media trends, to predictive policing, which aims to pinpoint those more likely to commit crimes. But we’re not quite as helpless as it seems. By understanding what allows intelligent retrieval to thrive, we can do our part to prevent it from functioning in a way that harms people, society, or itself. This is a step in the right direction for those of us looking for ways to slow AI’s adoption as a cultural norm.

AI represents too many different things that are too difficult to reconcile: things like the loss of livelihood versus faster, more personalized results online. There are things that outrage us versus things that leave us in awe. There are things that push our sense of urgency, such as the pressures of intensifying development, and yet others that can only unfold over time, like what kind of world we’re leaving for the future. It seems we’re on the cusp of something transformative, like the day when everything will either cohere or unravel.

We’re forced to reevaluate what we’re willing to tolerate and defend our position in society, while our minds are occupied with questions like how we’re going to survive this thing as a unit. It’s AI’s capacity to surpass human workflows that stands out to most of us, although it isn’t without fallibility. Nevertheless, we probably won’t have to worry about robots doing away with us because we do a fine job of eliminating each other, especially in precarious times like these.