Which acronym do you prefer? Might as well pick one, because they’re growing closer together with each passing day. As an industry, Aircraft Interiors is the game we’re all in, but one could certainly say the same about Artificial Intelligence, yet it’s hardly confined to an industry. That particular AI is already deeply embedded in almost every facet of our lives, even if we aren’t aware of it. It’s a big topic, and I, for one, know very little on the subject beyond how it’s most certainly destined to change the world AND destroy it, depending on who you’re talking to.
But today, we’re talking to Jim Roseman, a scholar and author in Dallas and, as it also happens, my brother. While Jim has written books and papers on many subjects, mostly in the academic world, I was especially taken by his most recent dive into Artificial Intelligence. In it he thankfully took the time to dig deeper than most of us care to go on the subject AND inform us on its three major types (yes, there are three!). To say the very least, it’s fascinating, and to go further is to realize that it’s literally Buck Rogers on crack kinda stuff. As a boy, even the notion of something like the internet was patently insane, beyond imagination. But AI? It might as well be time travel or particle transporters. It literally boggles the mind. But since none of us needs more of that, Roseman breaks it down for us in not-so-scary terms we can all understand. And finally, at my insistence, he shows us a little of how AI is transforming and accelerating the other AI, Aircraft Interiors. The industry has already seen pivotal AI integrations – and it’s only the beginning!
RR: Thanks for joining me, Jim. We’ve had a few recent discussions on AI, but mainly just topical and, in my case, mostly just regurgitating things I’ve read and seen in news pieces. But when you recently began unfolding your more expansive dive into AI and started spitting out terms like neuro-biophysical processes and hylomorphic theory, I thought, OK, this is probably above my head. But before I drifted off, you started talking in terms I could understand. I think that was when you answered, in relatively simple terms, the question: What is AI? And since the beginning always seems to be a good place to start, maybe we can start there. Can I ask you to recite that rather simple, elegant answer?
JR: Sure, Rick. Here’s the way I put it in my lecture: AI is a complex system of hardware and software technologies built to compile and analyze extremely large data sets very quickly using complicated algorithms to produce pattern-based predictive and “creative” guess outcomes, or what you might call “insights,” and also actions – in other words, robotic and automated physical machine systems that are designed to mimic human behavior.
RR: As I alluded to in my intro, there apparently are THREE types of AI. Can you please give us a breakdown of these and what distinguishes each?
JR: I use IBM’s simple breakdown – others put it slightly differently, but I think most basically agree. There are Artificial Narrow AI (or Weak AI); General (or Strong) AI; and Super AI. And within these three, IBM breaks down four functional types: Reactive; Limited Memory; Theory of Mind; and Self-Aware AI. The only one of the three main types that we have today is Artificial Narrow AI; the other two are only theoretical right now. Artificial Narrow or Weak AI is designed to perform single or “narrow” tasks, but it can perform them infinitely faster and often better than humans. Think of Google search, or when IBM’s Deep Blue beat Garry Kasparov (International Chess Grandmaster) in the late 1990s, or when Netflix recommends movies based on what you have watched before. That is Reactive AI. Examples of Limited Memory AI are Generative AI tools such as ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Apple’s recently announced ‘Apple Intelligence,’ which uses OpenAI’s ChatGPT – where AI predicts the next word, phrase, or visual element, drawing on the whole vast internet and “generating” an outcome. Virtual assistants like Siri, Alexa, and Google Assistant, along with self-driving cars, are also examples of Limited Memory AI. Reactive and Limited Memory are the only functional types of AI that exist today. There’s no General (Strong) or Super AI currently, so there is also no Theory of Mind or Self-Aware AI today – these are the stuff of dystopian novels and movies.
RR: During one of our conversations, you mentioned that in the 1940s, figures like Alan Turing (WW2 codebreaker) and John von Neumann were of the general opinion, even way back then, that the human brain and computers were similar, and that it seemed reasonable that human intelligence could likely be replicated in computer programs. That seems pretty insightful for a time when the Enigma machine was among the most sophisticated mathematical machines on the planet. Can you expound on their predictions, please, and how they’ve helped bring us to where we are today?
JR: Yes. So, Turing, a mathematician, first floated this idea in a paper back in 1936, and then more specifically in a journal article published in 1950. Turing’s ideas were picked up by Dartmouth mathematics professor John McCarthy, and in 1956 he and some close friends and colleagues formed a two-month-long, ten-man workshop to explore creating a thinking machine. McCarthy said he needed a name for the purpose of the workshop, so he coined the term “Artificial Intelligence.” However, his friends didn’t much like the moniker, because they imagined creating a genuine, not artificial, thinking machine. Their vision was something closer to General or Super AI in today’s parlance. That workshop kind of jump-started the pursuit of AI in earnest in the 1960s. Since then, there have been many fits and starts: lots of activity in the 1970s, a drop-off in the 1980s, renewed activity in the 1990s, and an explosion in the 2000s. It tracked along with the growth and change in computer technology, especially the internet, and now with quantum computing.
But it wasn’t a straight line from within a single discipline. It was a convergence. During the two-decade period between 1936 and 1956, many new philosophies and sciences emerged: new philosophies of mind and neuro-philosophy, neuroscience, neurobiology, cognitive science, linguistic science, and information theory – and, obviously, the development of the computer. Before 1936 . . . really, up until around 1950 . . . in the popular mind a “computer” was a person, someone with a slide rule like Dad made us learn how to use. The first commercial mainframe computer came along in 1951. This convergence is the background that allowed the 1956 workshop group to begin imagining Artificial Intelligence in earnest.
RR: I certainly remember Dad and his slide rule tutorials! I think you also told me that Generative AI alone is estimated to raise global GDP by $7 trillion and lift productivity growth by 1.5 percent over a ten-year period, assuming it’s widely adopted. That’s pretty staggering. Can you connect the dots on that a bit and perhaps give us some examples of how AI implementations are directly responsible for economic growth?
JR: Well, that’s right. But no one really knows, of course. That $7 trillion number is an estimate from the 2023 Stanford Emerging Technology Review. Just a couple of short comments on this. Since the launch of ChatGPT in the fall of 2022, we’ve been in a steep, almost vertical take-up curve. This is normal and predictable with new technologies – it’s called the “hype cycle,” where there is a lot of noise (hype) around the introduction. As the hype curve reaches the top, there’s a plateau. If advancements and new applications prove out, the curve starts over and growth continues. That’s what many believe will happen with Generative AI. We are definitely in that steep vertical climb now, and new uses are being discovered every day. It looks transformative, at least in terms of efficiency – doing things much, much faster – and, with that, doing wholly new things, period. Like the discovery of new drugs (for example, the mRNA vaccines during Covid) and new diagnostics (for example, using AI to analyze vast databases of MRIs for cancer diagnosis and treatment, and doing it so fast that a whole medical school of doctors could not achieve the same thing in 100 lifetimes). The associated projections of economic growth are speculative. But as long as the downsides – intellectual property issues, bias, and bad actors – don’t derail things, the potential uses of GenAI are so vast that its contribution to economic growth will be enormous.
There are other issues that could constrain the growth, however – not with its take-up but with its requirements. AI requires an array of special natural resources, which in turn require exploration, extraction, shipment, and refinement. This raises some ecological concerns. AI also requires huge numbers of data centers and enormous amounts of electricity to power them – in 2022, data centers made up about 2.5% of electricity demand in the U.S. Some predict this will rise to 20% by 2030, with AI accounting for three-quarters of that demand; this will require strengthening power grids around the world. Then there’s the human cost concern. The extraction of minerals and the manufacture and assembly of AI-driven devices typically rely on low-cost labor to make the growth cost-effective. There are serious ethical concerns surrounding this issue. All of these issues show a kind of underbelly to AI. It is not clear whether they will constrain AI growth.
RR: It’s all incredibly fascinating and at times, hard to even comprehend where it all may take us as a species, let alone in various business sectors. But because we’re an aviation magazine focused on jet aircraft interiors, might you give us some examples of how our industry sector is already benefitting from AI and perhaps pontificate on where it may ultimately take us?
JR: Okay. I’ll tell you what I know, which is limited and, of course, fluid. As I’m sure you and your readers know, aviation as a whole has been using AI in various ways for some time now – for example, in the defense and commercial aviation industries with the development and use of drones and pilotless planes. It is also being used heavily in airline maintenance (predictive maintenance) and in project management in aircraft manufacturing, as at Boeing and Airbus, and increasingly in modification centers for defense, commercial, and private airplane construction and rehab. Logistics and supply-chain management runs through all of these, for example in parts and inventory management. AI is also used in simulations for air traffic control systems and in flight simulators for pilot training. All of these applications utilize the machine learning components of AI, where data is developed and maintained to allow for rapid analysis and pattern-recognition suggestions for design and development, including in avionics.
I have also come across a few articles about AI in the aircraft interiors space – some that relate to visual, scenario-based concept design using Generative AI, including hard surface finishes, fabric, carpet textures, and color palette scenarios, to name a few. Most are not great at this point; they often take more time to correct or refine than the effort is worth. But as they come along, I imagine there will need to be trained specialists to use them, as in the cases of CAD, Photoshop, 3ds Max, etc. There are also the new AI-assisted project tools, such as those used in the aircraft manufacturing space. These will likely be applied more and more to the interiors space, if not already. Other areas where I can imagine AI being applied include increased collaboration between interior designers and the engineering and certification teams. It all depends on the data that can be garnered to build machine learning use cases. I can foresee, for example, that if the data of enough finished interiors could be built into a machine learning and Generative AI tool, a whole interior, top to bottom, could be generated with all the necessary specs and certification requirements. It’s also reasonable to think that AI-generated detail design analyses, for use leading up to what you call PDR and CDR, could be widely utilized at some point. That might be a stretch. But it’s certainly not out of the question.
It's important that designers begin to imagine and look for new tools that use AI. It could transform the space. At minimum, embracing AI will help maintain competitiveness and possibly create competitive advantage.
RR: As with most businesses, the supply chain is critical to meeting the demands of customers, be it commercial aircraft or private. Can you give us an example of how AI is currently being used to improve and/or predict supply-chain performance with respect to aircraft OEM production and/or interior outfitting?
JR: Well, we touched on that in the previous question, but in simple terms, it’s all about data. Machine learning and predictive AI are all built on specific sets of data. Think of how a Generative AI tool creates an image from a simple typed sentence: combining natural language processing (NLP), image classification, and Limited Memory AI, it draws on the vast data set it was trained on and generates a new image from that data. But for specific industry applications, the data set is smaller and must be built from existing and ever-accumulating data. Sometimes this means industry-wide data; at other times (say, for proprietary or competitive reasons), the data set might be built from within a single company. I once did a small project within the private aviation interiors space. That space appeared fragmented – lots of players, many boutique in character – which in many ways seems to me quite valuable, especially when the client base consists of wealthy private owners. But when it comes to specifying OEM parts for a design, or even ordering fabrics from here or there, a fragmented space can make things harder.
One of the greatest values, if not the greatest, of digital magazines like Freshbook is the bringing together of all relevant parties. What would be of even greater value would be an ever-growing reservoir of data within your industry sector, developed into an AI tool to facilitate greater efficiency across the supply chain – covering specification, availability, when and from where to order, and arrival prediction. But again, it all comes down to data.
RR: It seems that a pivotal point in the evolution of AI is the juncture at which machines become “self-aware.” Although that has little to do with aircraft, I still think our readers would love a chance to better understand the implications of that, and why respected figures like Elon Musk and Alan Bundy are offering rather gloomy predictions of that eventuality. Can you please give us a sense of where the most popular theories currently rest?
JR: As I mentioned before, we only have Artificial Narrow (or Weak) AI today. But that’s not what the Dartmouth workshop crew in 1956 imagined. They wanted to create a machine with genuine intelligence. That would mean something like General (or Strong) or Super AI. That would mean a machine that has “the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs.” In March of 2023, Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp, along with 1,400 other tech leaders and academics, signed a letter suggesting a pause in “Giant AI Experiments.” In it they endorsed the Asilomar AI Principles, which state: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” In other words, we are simply not ready for advanced AI – General or Super AI. We don’t know how to manage it, nor how to think through the ethics of its use. Stephen Hawking had said something similar. The fear is partly that we don’t yet even know how to manage the current Artificial Narrow (Weak) Generative AI, like ChatGPT. The greatest fear of the signers of the pause letter is an out-of-control General AI and the potential catastrophes that could result. For others, the even greater concern is what is called transhumanism.
These concerns are without doubt quite legitimate. To me personally, however, these fears, and the ethics surrounding them, cannot possibly be addressed if we don’t go back to the beginning and ask whether the basic assumption of a computational theory of mind is self-justifying. The question is not whether brains seem to work analogously to computers. The question is: what are human beings, and what is human consciousness? The theory behind Theory of Mind and Self-Aware AI is that human beings are in fact just machines, nothing more – and under this theory, transhumanism becomes an aspiration, a goal, not something to be feared. The problem is, such a view is not science. It’s a metaphor wrapped in a philosophical proposition without justification. Only by revisiting the philosophy can we hope to establish appropriate ethical boundaries for AI.
James Roseman is a business guy turned writer and lecturer following retirement. He enjoyed a career in banking, management, and IT consulting, and still does management consulting for small to mid-sized companies.
But most of Roseman's time is spent writing, lecturing at universities, teaching at his church, and serving on the boards of two non-profits. He has written two books: Rediscovering God’s Grand Story: In a Fragmented World of Pieces & Parts, a non-fiction work, and Habits of the Heart, a historical novel drawn from and based on his family.
Learn more: www.jmroseman.com.
www.alongthelight.com and the Habits of the Heart website & blog: www.habitsoftheheart.net.