AI & The Dilemma Of Technology

Here, in two sentences, is the Dilemma of Technology:

Fritz Haber earned the 1918 Nobel Prize in Chemistry for his invention of the Haber-Bosch process, which synthesizes ammonia from nitrogen and hydrogen gases. His discovery fed millions through its application to fertilizer, and killed millions more through its application to the manufacture of explosives.

Though the final count is of course not yet in, no scientifically minded accountant has tallied the columns and calculated which side is ahead in the race. My guess is the Lives Saved column outstrips the Lives Taken by a country mile. This is because, even though those milling about in that century did their level best to kill as many people as possible, global population increase over the twentieth century tracked increases in food production, and both were enormous.

The Dilemma is easy to state: you get both good and bad with every new invention or innovation, depending on the uses to which the invention is put. The Dilemma speaks of us, not the inventions. The stick is not culpable for crushing an enemy’s skull; its wielder is.

The late Australian philosopher David Stove opened his essay “Why you should be a conservative” by showing the Dilemma is ever-present:

A primitive society is being devastated by a disease, so you bring modern medicine to bear, and wipe out the disease, only to find that by doing so you have brought on a population explosion. You introduce contraception to control population, and find that you have dismantled a whole culture. At home you legislate to relieve the distress of unmarried mothers, and find you have given a cash incentive to the production of illegitimate children. You guarantee a minimum wage, and find that you have extinguished, not only specific industries, but industry itself as a personal trait. You enable everyone to travel, and one result is, that there is nowhere left worth travelling to. And so on.

You invent superior methods of surgery to cure awful ills, but you discover it allows people to indulge in the fantasy of “sex change” operations.

Stove shows us the Dilemma applies not just to technology, but to “innovations” and “Change we can believe in” of any kind. The moral is that caution in adopting new things is always to be recommended, but the historical lesson, which all know but all forget, is that it almost never is. Hence the perpetual mad race of one “solution” being proposed to fix all the problems introduced by previous “solutions”.

If the introduction of new technological solutions worries you, you can try making certain lines of research illegal. Like gain-of-lethality investigations. (Researchers in that proscribed field prefer the euphemism “gain-of-function”.) Anyway, murder is illegal, too, yet murder is still with us. So it shouldn’t be strange that forbidding scientists to monkey with viruses doesn’t stop them.

The current Dilemma surrounds the fears, and not a little hype, over so-called Artificial Intelligence. Should you fear it?

Not if you believe computers will come alive, gain sentience, become conscious, take over the world, make us their slaves, and that line of thing. These scenarios are not only unlikely, they are impossible. Computers are mere machines, and AI is nothing but glorified statistical models. They may be good to excellent models, at times, but they are models nonetheless, and all models only say what they are told to say. Which means it’s better to know and have some influence over who writes the models; or, in other words, better to have coders on our shores than off shores.
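The point that a statistical model can only echo what it was fed can be seen in miniature. Here is a toy, purely hypothetical bigram text model, a stand-in for "AI" at a vastly smaller scale, not any real system's architecture: it learns word-to-word transitions from a training corpus, and, crucially, it cannot emit a single word its trainers never showed it.

```python
import random
from collections import defaultdict

# Toy illustration: a statistical language model, however large,
# only recombines what its training data supplied. This bigram
# model is a hypothetical, minimal stand-in for "AI".

def train_bigrams(corpus):
    """Count word-to-next-word transitions in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=5, seed=0):
    """Walk the learned transitions. The model cannot produce any
    word it was never shown; it only says what it was told."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

model = train_bigrams("the model says what the data says")
print(generate(model, "the"))
```

Real systems replace the transition table with billions of learned weights, but the principle the paragraph above asserts is the same: the output space is fixed by the training material and by those who chose it.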

Don’t worry, then, about these models being omniscient or becoming godlike and having all the answers. They only have the answers supplied to them. What you do have to be frightened of are those who think the computers are infallible, or that they have somehow ascended beyond us. Offloading decisions to AI is the same as asking programmers what is best in life. The glamour of computers and the excellence AI will attain at mimicry will cause many to forget this.

Will these models take your job? Maybe. Or even likely, especially if you’re a cubicle dweller cycling emails. Or making cartoons or illustrations. Or answering phones and directing calls. But not if you’re hauling or installing plumbing supplies, knocking studs together, cutting open a chest and installing a stent, or even being a politician.

You might survive if you’re a writer. What passes for “news” can be programmed, and to some extent already is. The real danger is to talking heads, who can be replaced with invented computer personalities. Some “creatives” in these vaguely disreputable jobs will survive. Even if you cheat and pass off AI work as your own, the paycheck, for appearances’ sake, will still come to you.

On the other hand, back in the 1980s we were warned that secretaries were soon to be made redundant because word processors would steal their livelihoods. Secretaries don’t take dictation as often now as then, and aren’t as responsible for correspondence, but other uses were found for them.

In a speech at the American Dynamism Summit, J.D. Vance recalled the predictions that bank tellers would no longer be needed once ATMs took root. Didn’t happen.

Assembly line workers took a hit once robots became commonplace, but also because of the grand ideas behind free trade, which led to manufacturing largely moving offshore. Ideas about open borders, financial and literal, are changing, though, so perhaps factories will return. Grueling hand-work might no longer be required, with robots assigned to repetitive tasks, guided by software (a.k.a. AI). But people will prove indispensable to the process, though the new jobs won’t look the same as the old ones.

The interactions between man and machine, man and nature, and, even more so, between man and man are so hideously complex that venturing any prediction about the precise shape of our coming (greater) software-controlled economy, about who will lose their job and who gain a new one, requires more recklessness than I possess.

I can venture that change, as always, is coming; there’s not much you can do to stop it, and the best you can do is learn to live with it.


8 Comments

  1. brad tittle

    Grok doesn’t necessarily give back the answer it has been programmed to give back. To some extent it gives back the answer you want to hear. When I attempt to use it to solve problems, it hasn’t really helped. Sometimes sort of.

    It summarizes really well. But I have to be very careful with the summaries.

  2. Paul Fischer

Haber nearly blew himself up when he found out how unstable that stuff is. Rumor has it he spilled some on his lab table and a nearby ink blotter soaked up the liquid. Wondering just how unstable it was, he hit it with a hammer. Boom!

  3. Johnno

    BRIGGS, YOU FOOL!

    If you don’t hurry to be FIRST with your cockamamie futuristic technology… YOU WON’T GET ALL OF THE MONEY!!!

    The other guy will patent it first!

    Then NO-ONE can use it! Not without a price! Not without risk to our world hegemonic power!!!

    Our A.I. like our missiles and our copyrighted assembly line diseases must be smarter, Faster, LETHAL-ier than anything those pesky Rus and Chinks can create!

    Our flags more inclusive and wavier! Our degrees much more crediblier and easier to obtain! Our voting more accessible and democratier! Our Children more body positive and trans! Our Climate more stablized and less changier!

    And do you know what would help, Briggs?!

    DATA!!!

    HUGE, UNPRECEDENTED, COPIOUS AMOUNTS OF FRIVOLOUS DATA GATHERED THAN IS EVER POSSIBLE FOR ANY HUMAN TO SORT THROUGH OR EVEN PREDICT WHAT USE IT COULD POSSIBLY HAVE… UNTIL NOW! AT THE DAWN OF D.I.E.A.I.!

    You can’t afford to delay progress! Or else you will never afford anything at all! And we need all the respect and prestige we can purchase through our stock value!

    This is what Capitalism is all about!

    This is America!

  4. Rudolph Harrier

What AI has revealed more than anything else is that most people don’t want to think. Administrators like AI decisions not because they are good, but because the administrator does not have to come up with the decision himself, and if things go wrong he will be able to blame the AI model (and its creators will blame the training data, etc.). Back at the end of the ’70s, IBM had the statement “A computer can never be held responsible. Therefore a computer must never make a management decision.” But the reality is that people want computers to make management decisions precisely BECAUSE they cannot be held responsible.

    Most academics I talk to are convinced that general AI is just a few years away, and that at that time scientific research will consist of turning a computer on and having it spit out the results with little to no human ingenuity. But if you talk with them at length it becomes clear that this is just wish-casting: they really like having papers with their names on them, but they don’t like doing the work of experimenting and reasoning, so they want a computer to do everything for them. The big tell here is that they don’t go in for specialized computer assistants, which have more promise; they are always in the camp of “once we put in enough training data for the next round of LLMs, then computers will just know all possible science.”

    And of course it’s now being used near universally by students for all homework. I have to cut students some slack, though, since they are being inundated with advertisements saying that this is part of the “reasoning process,” and because they get assigned so much busywork by instructors that they probably don’t think of homework as something you think through in the first place.

    This will have a negative effect if we don’t make adjustments, since students definitely will not learn the basics in any subject if they go about things this way, and therefore will not have the capability of learning anything more advanced either. The solution is to have more direct in-person and one-on-one assessments of students. The trouble is that such assessments are not easily feasible for the large class sizes we currently have, and they will expose the fact that a large chunk of students can’t learn in any way beyond rote memorization.

    It has always been a bad idea to try to teach so many students without individualized feedback, but until now we’ve been able to pretend we were doing something. Now the choice will be to change things up, or to have AI slop fill in every assignment. The sad reality is that schools are going to choose the latter. We know this because they made the same choice when it came to pocket calculators and smartphones. The bad thing about this choice is that it will screw over the B-level student who can learn well when given the right environment, since “have an AI fill out this standardized homework assignment” is not that environment.

  5. shawn marshall

    Demons can affect physical objects.
    Demons can utilize cell phones and computers.
    Demons can alter what you think you see.
    To abandon critical thinking in favor of machine generated output is a denial of mind.
    Mind is a terrible thing to waste.
    Why do so many succumb to the demonic?
    Because it is so easy….in the beginning.

  6. @brad tittle:
    > It summarizes really well. But I have to be very careful with the summaries.

    Isn’t this a contradiction?

    @shawn marshall:
    > Demons can utilize cell phones and computers.
    > Demons can alter what you think you see.

    Why invoke the demonic when the earthly is perfectly adequate? Social media companies realized there’s a lot of money to be had in manipulating people, so they do that. No demons required.

  7. IMHO the dot-ai bubble, as direct successor to the dot-com bubble, is trying to instantiate Gödel’s incompleteness theorems: linguistic terms as axioms, tokens/weights and prompts as theorems, and validatable truth as output … this may look nice compared to previously existing [learned-from] texts, but it cannot escape the bubble.

  8. Cloudbuster

    “Anyway, murder is illegal, too, yet murder is still with us. So it shouldn’t be strange that forbidding scientists to monkey with viruses doesn’t stop them.”

    Good scientists tend to be smart and have low time preference. Execute a few gain-of-lethality researchers and the field will quickly become far less attractive to anyone qualified to engage in it.

    That could mean that then you will have only comparatively dumb people engaged in gain-of-lethality research and dumb people do dumb things, often with catastrophic results, but I’m pretty sure we’re already partway there, anyway.
