
Philosophy Club Meeting

Please join us in Auerbach Hall Room 321 at the University of Hartford or online this Wednesday, Feb. 26, from 1 p.m. to 2 p.m., for our next meeting of the University of Hartford Philosophy Club as we read and discuss: Artificial Intelligence and Martin Buber—Why We Need Both.


When Dartmouth professor John McCarthy coined the term, “Artificial Intelligence” was devised as a neutral label, referring a bit more generically to already existing fields such as cybernetics, automata theory, and information processing without appearing to trespass rudely on any of them. This was all in preparation for the Dartmouth Summer Research Project of 1956, which McCarthy and others were planning in order to explore the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Since then, human interest in the prospects of artificial intelligence has waxed and waned, at various times stoked by more than a little doomsday fear that, in the course of our seemingly ever-improving technology, we might be painting ourselves into a corner of eventual total domination by machines, an imagined predicament that Princeton mathematician John von Neumann (1903–1957) dubbed the “singularity.”

This kind of fear goes back centuries. One could say Karl Marx’s “singularity” was his predicted collapse of the worldwide capitalist economy, caused in part by too much human labor being replaced by automation. Marx’s error here was his assumption that the human economy is finite in all respects and so would finally run out of resources with which to form new markets, to rejuvenate old ones, and to sustain itself through its periodic crises of aging markets, with their attendant problems of lost competitiveness and unemployment. In fact, the human economy is infinite in some respects, namely all those respects generated by the human imagination, and that is likely what has saved it so far from the worst of Marx’s dire warnings.

Unfortunately, this science-fiction-laced aspect of the AI discussion may be distracting us from noticing that we are indeed on the brink of a new breakthrough in the development of computer-assisted learning, one we might more suitably call mental prosthesis than artificial intelligence. The latter term suggests a future of machines doing things on “their” own, autonomously, without any instrumental relationship to real human thinking and intentionality, and perhaps even in opposition to it. But in fact, what computers have always been to us is what they will continue to be: mental prosthetics. They do as they are programmed; even if they are programmed to self-program, the ultimate cause and agency lies entirely with the original human programmer.

Can people program computers to do bad things? That’s what we really have to worry about. But that is worrying about humans, not computers...


The University of Hartford Philosophy Club has an informal, jovial atmosphere. It is a place where students, professors, and people from the community at large meet as peers. Sometimes presentations are given, followed by discussion. Other times, topics are hashed out by the whole group.

Presenters may be students, professors, or people from the community. Anyone can offer to present a topic. The mode of presentation may be as formal or informal as the presenter chooses.


Come and go as you wish. Bring friends. Suggest topics and activities. Take over the club! It belongs to you! Just show up! - Brian Skelly, bskelly@hartford.edu, 413-273-2273