ISSUE 16, SEPT 2024


The HAL Dilemma. 2001: A Space Odyssey (1968), Stanley Kubrick / Arthur C. Clarke

RR: As with most businesses, the supply chain is critical to meeting customer demands, be it for commercial or private aircraft. Can you give us an example of how AI is currently being used to improve and/or predict supply chains with respect to aircraft OEM production and interior outfitting?

JR: Well, we touched on that in the previous question, but in simple terms, it's all about data. Machine learning and predictive AI are all built on specific sets of data. Think of how ChatGPT generates an image just from a simple typed sentence. Combining natural language processing (NLP), image classification, and Limited Memory AI, it draws on a vast data set gathered from across the internet and generates a new image from it. But for specific industry applications, the data set is smaller and must be built from existing and ever-accumulating data. Sometimes that means industry-wide data; at other times (say for proprietary or competitive reasons) the data set must be built from within a single company. I once did a small project within the private aviation interior space. That space appeared fractured, that is, full of players, many boutique in character, which in many ways seems to me quite valuable, especially when the client base consists of wealthy private owners. But when it comes to specifying OEM parts for a design, or even ordering fabrics from here or there, a fractured space can make things harder.

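To make the "type a sentence, get an image" flow JR describes concrete, here is a minimal sketch using the OpenAI Python SDK. It is an illustration only, not anything from JR's work: the model choice and prompt are assumptions, and the script presumes an OPENAI_API_KEY environment variable is set.

```python
# Minimal sketch of text-to-image generation via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt; any plain-English sentence works the same way.
result = client.images.generate(
    model="dall-e-3",  # assumed model choice, for illustration only
    prompt="A private-jet cabin interior in warm walnut and cream leather",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```
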
One of the greatest values of digital magazines like Freshbook, if not the greatest, is that they bring all the relevant parties together. Of even greater value would be an ever-growing reservoir of data within your industry sector, developed into an AI that drives efficiency across the supply chain: what can be specified, when and from where it is available, and when it will arrive. But again, it all comes down to data.

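As a rough illustration of the kind of supply-chain prediction described here, the sketch below trains a lead-time model on hypothetical historical order data with scikit-learn. The file name, column names, and model choice are all placeholder assumptions, not a real industry data set.

```python
# Minimal sketch: predicting supplier lead times from past purchase orders.
# All file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

orders = pd.read_csv("order_history.csv")  # one row per past purchase order

# One-hot encode the categorical columns; keep quantity and month numeric.
X = pd.get_dummies(
    orders[["supplier_id", "part_category", "order_qty", "order_month"]],
    columns=["supplier_id", "part_category"],
)
y = orders["actual_lead_time_days"]  # the quantity we want to predict

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, preds):.1f} days")
```

The same trained model could then score open purchase orders to flag likely late arrivals, the "arrival prediction" use JR points to.
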
RR: It seems that a pivotal point in the evolution of AI is the juncture at which machines become "self-aware." Although that has little to do with aircraft, I still think our readers would love a chance to better understand its implications, and why respected figures like Elon Musk and Alan Bundy offer rather gloomy predictions about that eventuality. Can you give us a sense of where the most popular theories currently rest?

JR: As I mentioned before, we only have Artificial Narrow (or Weak) AI today. But that's not what the Dartmouth workshop crew imagined back in 1956. They wanted to create a machine with genuine intelligence, which would mean something like General (or Strong) AI, or Super AI: a machine that has "the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs."

In March of 2023, Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp, along with 1,400 other tech leaders and academics, signed an open letter calling for a pause on "Giant AI Experiments." In it they endorsed the Asilomar AI Principles, which state that "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." In other words, we are simply not ready for advanced AI, whether General or Super. We don't know how to manage it, nor how to think through the ethics of its use. Stephen Hawking said something similar. Part of the fear is that we don't yet know how to manage even the current Artificial Narrow (Weak) Generative AI, like ChatGPT. The greatest fear of the letter's signers is an out-of-control General AI and the potential catastrophes that could result. For others, the even greater concern is what is called transhumanism.

These concerns are without doubt legitimate. To my mind, however, neither these fears nor the ethics surrounding them can be addressed unless we go back to the beginning and ask whether the basic assumption of a computational theory of mind is self-justifying. The question is not whether brains seem to work analogously to computers. The question is: what are human beings, and what is human consciousness? The theory behind Theory of Mind and Self-Aware AI is that human beings are in fact just machines, nothing more; under this theory, transhumanism becomes an aspiration, a goal, not something to be feared. The problem is, such a view is not science. It is a metaphor wrapped in a philosophical proposition, offered without justification. Only by revisiting the philosophy can we hope to establish appropriate ethical boundaries for AI.


James Roseman is a businessman turned writer and lecturer. Following retirement from a career in banking, management, and IT consulting, he continues to do management consulting for small to mid-sized companies.

Most of Roseman's time, though, is spent writing, lecturing at universities, teaching at his church, and serving on the boards of two non-profits. He has written two books: Rediscovering God's Grand Story: In a Fragmented World of Pieces & Parts, a work of non-fiction, and Habits of the Heart, a historical novel drawn from his family's history.

Learn more: www.jmroseman.com, www.alongthelight.com, and the Habits of the Heart website & blog: www.habitsoftheheart.net
