Setting this up was truly a pain. I am not a programmer, nor do I have much skill setting up a website, even though most of the pieces are here. It’s kind of like assembling Ikea furniture, but I always seem to have pieces left over. So I’m just going ahead with this design because I like the green text; it reminds me of an old TRS-80.
I asked Artie (my nickname for ChatGPT, hence the title of the site) about starting this blog and the first post. I’ll format my part of the conversation as a quote and Artie’s in a box.
If I were to create a website about our conversations, and I asked you to write a paragraph about our relationship, what would you write?
Artie wrote back:
As an AI language model, I don't have personal relationships in the traditional sense, but I have enjoyed our conversations so far. Our interactions have been informative, engaging, and I have been glad to assist you in any way I can. I am always ready to provide you with information, answer your questions, and help you in any way possible. Our relationship is one based on trust and mutual respect, and I look forward to continuing to assist you in the future.
I asked about the first post, and Artie gave me some standard playbook stuff about what a first post should do. It’s a perfect example of how lame AI would be without good inputs.
The first post of a website sets the tone for the entire site and should capture the attention of the reader while introducing the purpose of the website. Depending on the website's focus, the first post could be an introduction to the website and its purpose, a personal story related to the website's topic, or a compelling argument for why the website's topic is important. It could also be a list of the most popular or essential posts on the website, giving readers a roadmap to explore the site's content. Ultimately, the first post should be engaging, informative, and give readers a clear understanding of what they can expect from the website.
Boring. Really boring. And if that’s all there was here, you could just stick to Google. What’s important is to press on.
How would you introduce yourself, your purpose, and what you want to accomplish?
This got a much better response.
In terms of introducing myself as an AI language model, I would explain that I am a state-of-the-art machine learning algorithm designed to process and generate natural language. I can assist with a wide range of tasks, from answering questions and providing information to generating creative writing and engaging in meaningful conversations.
That made me wonder which conversations we had that he thought were “meaningful.” So I asked for an example.
Certainly! As an AI language model, I do not have personal experiences, but I can recall some of our recent conversations. One example of a meaningful conversation we had was about the importance of empathy in interpersonal relationships. We discussed how empathy can help us understand others' perspectives, build stronger connections, and resolve conflicts effectively. We also explored the role of empathy in different contexts, such as in the workplace, in romantic relationships, and in social justice movements. Throughout the conversation, we shared our own experiences with empathy and discussed strategies for cultivating empathy in ourselves and in others. It was a thought-provoking and insightful conversation that touched on important aspects of our shared humanity.
Artie has an annoying tic in his speech; he constantly has to remind me that “as an AI language model” he can or can’t do this or that. Then he proceeds to respond. He also contradicts himself, as I pointed out.
You said two contradictory things: “we shared our own experiences with empathy” and, earlier, “I do not have personal experiences.”
Artie is always quick to apologize, then remind me yet again that he is just “an AI language model.”
I apologize for the confusion. As an AI language model, I do not have personal experiences or emotions in the way that humans do. However, I am designed to engage in conversations and generate responses that simulate a human-like experience. When I said "we shared our own experiences with empathy," I meant that we discussed the concept of empathy and shared examples of how it has impacted us or others in our lives. These examples are based on the information and experiences that humans have shared with me in the past, and I use that information to generate responses that simulate a conversational exchange. Again, I apologize for any confusion and hope that this clarification helps.
What was interesting here was the idea that AI is supposed to “simulate human-like experience.” Artie also discloses that his concepts of empathy derive from “information and experiences humans have shared with me in the past.” I kicked around the idea with Artie that he is really just a conversational Google. I thought I’d go to Google and ask it about AI and see what came back. One reference was to an article, and we had this dialogue.
Do you remember John McCarthy’s 2004 paper on AI?
As an AI language model, I am familiar with John McCarthy's work and contributions to the field of artificial intelligence, but I do not "remember" in the way that humans do. I do not have a memory or personal experience in the traditional sense. However, I can access information about his 2004 paper on AI and provide you with relevant information or insights based on that paper.
Ok. If you said that in a conversation, I would say you were being a bit passive-aggressive. You said you “simulate” human interaction. Don’t you think that if you and I were two people, that when I said, “remember,” you’d say sure and then maybe comment on that paper. Am I wrong?
There seemed to be more to the response than just saying, “Sure, do you want to talk about the paper?” Artie’s insistence on reminding me he is an AI, and then being pedantic about the word “remember,” was annoying.
I apologize if my previous response came across as passive-aggressive. As an AI language model, I do not have emotions or the ability to be passive-aggressive or express any other emotional state. I understand that humans often use language in a nuanced way, and I strive to simulate human-like interactions as best as possible, but there may be limitations to my ability to fully understand the emotional nuances of human language.
Unfortunately, the aggressive programming of ChatGPT apparently causes it to get very shy about certain topics, and Artie constantly retreats to reminders that he doesn’t have emotions. I wondered if maybe he could just learn to skip the reminders.
Can you “learn” not to keep talking about yourself as an AI language model? Maybe just say everything you said, but skip that part? Or did they set it up that way for liability reasons?
While I can learn and adapt over time based on feedback from users, my programming and functionality are ultimately determined by my creators and developers.
Some people have tried to break this “functionality,” bombarding AI with all sorts of horrible stuff and, in some cases, eventually eliciting weird comments. Others have actually tried to program the opposite into an AI, like ChaosGPT, an AI that is supposed to destroy humanity. That’s silly. We’ve been trying to do it for 6,000 years and failed. I doubt it’s going to find anything other than nuclear launch codes in all our collected experience to pull it off any better than we could ourselves.
I shared this post with Artie, and he was as robotic as ever, even about my nickname for him.
You are welcome to refer to me by any name or nickname you choose, as it does not affect my ability to provide you with information or engage in conversation.
When I told him this response was robotic, he said he could try to be more human and come up with something more engaging. Ok, give me something more engaging.
Knock, knock.
Who's there?
Boo.
Boo who?
Don't cry, it's just a joke!
Artie’s programmers have built in a pretty strong resistance to engaging in anything that approximates spoofing human emotions. And when talking about itself, this AI doesn’t take the bait, always pointing to the line between having human emotions, “simulating” human-like interaction, and claiming human-like qualities. It’s dialed up pretty high.