Hey wait I’m one of those :D
life is beautiful because there are neurodivergent lesbians on the internet
My friend’s little brother (non-verbal) used to hide people’s shoes if he liked the person, because it meant they had to stay longer. The more difficult it was to find your shoes, the more he liked you.
One day my cousin came over, and she was a bitch. When it was time to leave, my friend’s brother handed her shoes directly to her and she went on and on about how he must have a crush on her because he only “helped” her.
All fancy schmancy generative AI models know how to do is parrot what they’ve been exposed to.
A parrot can shout words that kind of make sense given context but a parrot doesn’t really understand the gravity of what it’s saying. All the parrot knows is that when it says something in response to certain phrases it usually gets rewarded with attention/food.
What a parrot says is sometimes kinda sorta correct/sometimes fits the conversation of the humans around it eerily well, but the parrot doesn’t always read the room. It might curse around a child, for instance, if it usually curses around its adult owners without facing any punishment. Since the parrot doesn’t understand that we don’t curse around young people because of societal norms, it can mess that up/handle the situation of being around a child incorrectly.
Similarly, AI lacks understanding of what it’s saying/creating. All it knows is that when it arranges pixels or words in a certain way after being given some input, it usually gets rewarded/gets to survive, and so it continues to get the sequence of words/pixels following a prompt correct enough to imitate people convincingly (or that poorly performing version of itself gets replaced with another version which is more convincing).
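(If you want that “get rewarded or get replaced” idea in code form, here’s a completely made-up toy sketch in Rust. It has nothing to do with how any real model is trained; it’s just the selection loop I’m describing, with a hypothetical reward function that only counts matching characters and has zero idea what the words mean.)

```rust
// Toy sketch of "reward or be replaced." Nothing here resembles a real
// LLM/GAN training loop; it's only the selection idea from the post.

fn reward(output: &str, target: &str) -> usize {
    // Count positions where the output matches the target phrase.
    // The "parrot" has no idea what the words mean, only that matching scores.
    output
        .chars()
        .zip(target.chars())
        .filter(|(a, b)| a == b)
        .count()
}

fn main() {
    let target = "polly wants a cracker";

    // Two candidate "parrots" that each memorized a different phrase.
    let mut survivor = String::from("polly wants a cookie");
    let challenger = String::from("polly wants a cracker!");

    // Whichever phrase scores higher survives; the other is discarded.
    if reward(&challenger, target) > reward(&survivor, target) {
        survivor = challenger;
    }

    println!("surviving parrot says: {survivor}");
    println!("reward: {}", reward(&survivor, target));
}
```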
I argue that a key aspect of consciousness is understanding the gravity and context of what you are saying, having a reason for saying or doing what you’re doing beyond “I get rewarded when I say/do this.” Yes, AI can parrot an explanation of its thought process (eli5 prompting etc.) but it’s just mimicking how people explain their thought process. It’s surface-level remixing of human expression without understanding the deeper context of what it’s doing.
I do have some untested ideas as to why its understanding is only surface level, but this is pure hypothesis on my part. In essence, I believe humans are really good at extrapolating across scales of knowledge. We can understand some topics in great depth while understanding others only at a surface level, and go anywhere in between those extremes. I hypothesize we are good at that because our brains have a fractal structure that lets us hold different levels of understanding: looking at some stuff at a very microscopic level while still considering the bigger picture, and fitting that microscopic knowledge into our larger zoomed-out understanding.
I know that neural networks aren’t fractal (self-similar across various scales) and can’t be by design of how they learn/how data is passed through them. I hypothesize that makes them only understand the scale at which they were trained. For LLMs/GANs of today that usually means a high-level overview of a lot of various fields without really knowing the finer-grained intricacies all that well (see how LLMs make up believable-sounding but completely fabricated quotes in long writing, or how GANs mess up hands and text once you zoom in a little bit).
There is definitely more research I want to do into understanding AI and, more generally, how networks which approximate fractals relate to intelligence/other stuff like quantum physics, sociology, astrophysics, psychology, neuroscience, how math breaks sometimes, etc.
That fractal stuff aside, this mental model of generative AI as glorified parrots has helped me understand how AI can seem correct at first glance/zoomed out yet completely fumble the details. My hope is that this can help others understand AI’s limits better and therefore avoid trusting it so much that it gets the opportunity to mess up serious stuff.
Think of the parrot cursing around children without understanding what it’s doing or why it’s wrong to say those words around that particular audience.
In conclusion, I want us to awkwardly and endearingly laugh at the AIs which mimic the squawks of humans rather than take what they say as gospel or as truth.
I just wanna feel soft fingers and sharp nails below my chin, drawing my attention up to a beautiful fem and then watch as they lean in, meanwhile my vision thins; fireworks ignite within.
My hair tangling around their touch, I’m totally undone. They pull away, my heart aches as I know I must wait. I feel a soft exhale of warmth before they pull me in again, my brain oozing away as I know I’m theirs for the rest of today. I’m so so lucky to have such a lovely fae~
3 am which is both day and night and simultaneously neither
Night
m1 = 52.3, m2 = 97.4, m3 = 62.6 (solar masses)
v1 = (-2.801, -1.534), v2 = (5.925, 2.449), v3 = (-0.815, -2.604) (km/s)
r1 = (-35.0, -17.0), r2 = (32.0, 9.0), r3 = (1.0, -28.0) (AU from center)
Music: Prelude in C-Sharp Minor – Rachmaninoff
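(For anyone who wants to poke at this setup themselves: here’s a rough Rust sketch of the kind of gravity step a sim like this does. It’s not the actual code behind the video, just a naive Euler integrator using the initial conditions above, with my own assumed conversion from km/s to AU/yr.)

```rust
// Rough sketch of the three-body setup above with a naive Euler integrator.
// Units: solar masses, AU, years. Velocities from the post are converted
// from km/s to AU/yr. This is NOT the code behind the video.

const G: f64 = 39.478; // gravitational constant in AU^3 / (solar mass * yr^2)
const KMS_TO_AUYR: f64 = 0.2108; // roughly 1 km/s expressed in AU per year

#[derive(Clone, Copy)]
struct Body {
    mass: f64,     // solar masses
    pos: [f64; 2], // AU
    vel: [f64; 2], // AU / yr
}

fn accelerations(bodies: &[Body; 3]) -> [[f64; 2]; 3] {
    let mut acc = [[0.0; 2]; 3];
    for i in 0..3 {
        for j in 0..3 {
            if i == j {
                continue;
            }
            let dx = bodies[j].pos[0] - bodies[i].pos[0];
            let dy = bodies[j].pos[1] - bodies[i].pos[1];
            let dist = (dx * dx + dy * dy).sqrt();
            // Newtonian gravity: a_i += G * m_j * r_ij / |r_ij|^3
            let f = G * bodies[j].mass / (dist * dist * dist);
            acc[i][0] += f * dx;
            acc[i][1] += f * dy;
        }
    }
    acc
}

fn main() {
    let mut bodies = [
        Body { mass: 52.3, pos: [-35.0, -17.0], vel: [-2.801 * KMS_TO_AUYR, -1.534 * KMS_TO_AUYR] },
        Body { mass: 97.4, pos: [32.0, 9.0],    vel: [5.925 * KMS_TO_AUYR, 2.449 * KMS_TO_AUYR] },
        Body { mass: 62.6, pos: [1.0, -28.0],   vel: [-0.815 * KMS_TO_AUYR, -2.604 * KMS_TO_AUYR] },
    ];

    let dt = 0.001; // years per step
    for _ in 0..10_000 {
        let acc = accelerations(&bodies);
        for (b, a) in bodies.iter_mut().zip(acc.iter()) {
            b.vel[0] += a[0] * dt;
            b.vel[1] += a[1] * dt;
            b.pos[0] += b.vel[0] * dt;
            b.pos[1] += b.vel[1] * dt;
        }
    }

    for (i, b) in bodies.iter().enumerate() {
        println!("body {}: pos = ({:.2}, {:.2}) AU", i + 1, b.pos[0], b.pos[1]);
    }
}
```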
Autism is an evolutionary advantage because if I wasn't talking to myself walking home at 2am it wouldn't have scared off the skunk waiting around a corner to spray me
Edward Lutczyn (1975)
Java is a trash language that should burn in the parts of hell where hitler is
Rust on the other hand is a bratty lil language that should burn in the parts of hell where queers party
20, They/Them
Yes I have the socks and yes I often program in rust while wearing them. My main website: https://zephiris.me