

What interests in particular do you think aren’t well represented?
I’ll be very interested to someday figure out what the explanation for this is. It’s extremely bizarre and very creepy. Also, it’s crazy that Internet access can just be whisked away so easily by the government. I guess satellite is just about the only way around that.
To be fair, the headline of this article did literally call it a birthday parade.
In the same sense that some users might post only articles about ICE in California, or only articles about hurricanes in Florida, I still think that’s not very strange. Some people are particularly invested in specific topics. Maybe the author is a rape victim, or is close to one, and is therefore especially interested in the topic. People dedicate their whole lives and careers to specific activist topics, so I don’t think it’s too strange for someone to dedicate most of their posting activity on one particular website to one. Anyway, I’m not sure what the ulterior motive would be here - what do you think is the real reason for posting so many articles about rape?
But reasoning about it is intelligent, and the point of this study is to determine the extent to which these models are reasoning or not - which, again, has nothing to do with emotions. And furthermore, my initial question - whether pattern following should automatically be disqualified as intelligence, as the person summarizing this study (and notably not the study itself) claims - is the real question here.
Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. That is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That claim is unscientific if we don’t know whether “actually reasoning” consists of following patterns, or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:
It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that “humans do pattern following” is just as valid an initial guess as “humans don’t do pattern following”, so shouldn’t we have studies to back up whichever direction we lean?
I think you and I are in agreement, we’re upholding the same principle but in different directions.
But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.
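For instance, the classic recursive solution fits in a few lines - here’s a sketch in Python (the peg names and function shape are my own illustration, not anything from the study):

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from the source peg to the target peg."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks out of the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top of it

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))  # a 3-disk puzzle takes 2**3 - 1 = 7 moves
```

No emotions required: the optimal solution falls straight out of the recursive structure of the puzzle.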
As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous/having free will/desires of its own choosing, etc.
This sort of thing has been published a lot for a while now, but why is it assumed that this isn’t what human reasoning consists of? Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies proving that models are “just” memorizing patterns don’t prove anything beyond that, unless coupled with research on the human brain showing that we do something different.
The problem is that the location of the steepness determines whether the curve means easy at first with slow progress later, or slow progress at first and easy later. Is it like x^1.5, or is it like ln(x)? Both are very steep at some point.
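To make that concrete, here’s a quick sketch (my own illustration) comparing the slopes of the two curves - x^1.5 is gentle at first and steepens later, while ln(x) is steepest at the start and then flattens out:

```python
# Slope ("difficulty of the next step") at a few points along each curve:
for x in (1, 10, 100):
    power_slope = 1.5 * x ** 0.5  # derivative of x**1.5: grows as x grows
    log_slope = 1 / x             # derivative of ln(x): shrinks as x grows
    print(x, power_slope, log_slope)
```

Same word, “steep”, but opposite stories about where the hard part lives.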
Yeah, I don’t think the phrase “learning curve” has any built-in implication, even culturally, that the reasonable default assumption is one way or the other. I’ve only ever heard “learning curve” used to refer to something getting easier after a while, which is indeed a valid curve.
Yeah, this is a common misunderstanding I’ve had to clarify for people as well, even people who work in tech. I support reserving “Law” for things that actually are scientific laws. I don’t even like using it as a joke (Murphy’s Law) because, unbelievably, some people really do take that to be a law of the universe too.
Yeah, a lot of these things actually do make sense, just in a more precise way than even the people using them intend. Gravitational pull is also like this. Earth’s gravitational pull is not weak, it literally keeps everything on Earth tethered to it. More importantly, it happens as an intrinsic property of the Earth, the Earth doesn’t need to “try” to exert gravitational pull on things. Furthermore, gravitational pull attracts more mass which begets even more gravitational pull, like a snowball effect.
So gravitational pull is not about the strength of the force, but the fact that it is natural, effortless, and often forms a positive feedback loop (borrowing from another comment here lol).
So if I say someone at work has a lot of gravitational pull, I’m conveying that they do a good job of bringing other people into their area or work, that they naturally do it almost without even trying to, and that as their social influence grows, they just end up with even more social influence. It’s a really deep metaphor which is also physically accurate.
Hm, this is interesting. I only have a passing understanding of control theory, but couldn’t a positive feedback loop indeed be good when the output is always desirable in increased quantities? A positive feedback loop doesn’t necessarily lead to instability, like you said. So maybe this is just me actually-ing your actually, lol.
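A toy simulation of that distinction (my own sketch, not textbook control theory): pure positive feedback diverges without bound, but the same loop with a saturation limit grows and then settles at its cap - which is fine if more output is always desirable up to that cap.

```python
def step_unbounded(x, gain=1.2):
    # pure positive feedback: the output is fed back and amplified each step
    return gain * x

def step_saturating(x, gain=1.2, cap=100.0):
    # positive feedback with saturation: grows, then levels off at the cap
    return min(gain * x, cap)

x = y = 1.0
for _ in range(50):
    x = step_unbounded(x)
    y = step_saturating(y)
print(x > 1e3, y == 100.0)  # unbounded loop blows up; saturating loop settles
```

So “positive feedback” by itself isn’t a verdict - it depends on whether anything in the system bounds it.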
As for “more optimal”, oof, I say that a lot, so maybe I’m biased. When I say it, I’m thinking in percentage terms: if optimal is X, then 80% of X is indeed more of the optimal amount than 20% of X. Yes, optimality is a point, but “more optimal” just seems like shorthand for “closer to optimal”. Or maybe I should just start saying that?
This reminds me of a professor I had who hated when people said something was “growing exponentially”, since he argued the exponent could be 1, or fractional, or negative. It’s a technically correct distinction, but the thing is that people who use that term to describe something growing like x^2 aren’t really wrong in spirit. I feel like with this type of phrasing, it’s fine not to deal with edge cases, because being more precise actually makes what is said more confusing.
“I’m in a negative feedback loop with respect to my laziness which will soon stabilize with me continually going to the gym daily, which is closer to optimal than before. As a result, my energy levels are going to increase exponentially, where the value of the exponent is greater than 1!”
Hmm. Now that I say it that doesn’t seem that crazy. Although I do still think some common “default settings” don’t do any harm.
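For what it’s worth, the growth-rate distinction in question is easy to make concrete: x^2 (polynomial) and 2^x (truly exponential) coincide at small inputs and diverge wildly later - which is exactly why the pedantry both matters at scale and rarely matters in conversation.

```python
# Polynomial vs. true exponential growth at a few sample points
for x in (2, 10, 20):
    print(x, x ** 2, 2 ** x)
# At x=2 they coincide (4 vs 4); by x=20 it's 400 vs 1,048,576
```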
So then can anything that produces dopamine be addictive? Can I get addicted to hugging my girlfriend, or addicted to reading books, or jogging? Or is there some threshold? Does the intensity per time matter, or just the intensity, or just the time? What about the frequency of exposure? Does any amount of dopamine release make me slightly more addicted to whatever it is, or is there some threshold that needs to be exceeded? Do dopamine-based addictions produce physical withdrawal symptoms, always, never, sometimes? Depending on what? And are physical withdrawal symptoms necessary to constitute addiction or are there different tiers of addiction?
You see what I’m getting at. There’s sooo many questions that need to be answered before just saying “this produces lots of dopamine therefore it’s addictive and bad and should be limited”. While I appreciate and empathize with your sentiment about people cherry-picking the studies they like (sounding like an LLM here lol), it’s not as if science doesn’t know how to deal with that problem, and it certainly isn’t a reason to stop caring about or citing studies at all, or say “well you’ve got your studies and I’ve got mine”. Just because both sides have studies that give evidence in their favor doesn’t mean both sides are equally valid or that it’s impossible to reach an informed conclusion one way or the other.
My next biggest question (and what I’m trying to drive at with the semi-rhetorical slew of questions I opened with) would be what makes something an addiction or not? Am I addicted to staying alive, because I’ll do anything to stay alive as long as possible? That seems silly to call an addiction, since it doesn’t do any harm. And how do we delineate between, say, someone who is addicted to playing with Rubik’s Cubes vs. someone who just really likes Rubik’s Cubes and has poor self-control? Or what about someone with some other mental quirk, like someone who plays with Rubik’s Cubes a lot due to OCD, or maybe an autistic person who plays a lot with Rubik’s Cubes out of a special interest? Does the existence of such people mean that “Rubik’s Cube Addiction” is a real concern that can happen to anyone who plays with Rubik’s Cubes too much? Or perhaps Rubik’s Cubes are not addictive at all, and it is separate traits driving people to engage with them in a way that appears addictive to others.
I know I’ve written a long post and asked lots of questions. It’s not my intention to “gish gallop” you, just to convey my variety of questions. The Rubik’s example is the one thing I’m most curious to hear your thoughts on. (There I go sounding like an LLM again)
If every person who disagrees with you counts as further evidence that you’re right, then you’re thinking in an unfalsifiable manner, which is the basis for many a flawed conclusion. It doesn’t necessarily make you wrong, but you should really make sure to find justifications for your beliefs that are based on falsifiable reasoning instead. That’s the best way to know whether what you believe is right or wrong: you can try to falsify your beliefs in the ways you know them to be falsifiable, and if they survive, you can say “Well, I tried to disprove this, and it still passed the test!”
So, let me ask you this: what would, hypothetically, suffice to prove, or at least count as evidence, that porn addiction does not exist? If your answer is “nothing”, then you’re in unfalsifiable territory.
Oh, yeah, I know. My issue is more about the word being reused so much. Whenever I see a word take off memetically like that I feel like it’s usually accompanied by a lack of deep thought. Almost like a thought-terminating cliche.
Yeeees although I feel like I’m walking into a trap rn
I’m more sick of hearing “slop slop slop slop slop” than I am of hearing about AI at this point. People sling slop around like it’s some sort of brave, heroic, destructive insult, leaving AI users in tears and shambles in its wake. Ironic considering a complaint against AI is that it regurgitates the same characteristic bit of content over and over again mindlessly. But even ChatGPT would have the writing skill to cycle in some other adjectives, my goodness.
Yeah, I ask because I’d really like to start moderating or contributing to some type of community that is very popular outside of Lemmy but not currently on Lemmy much. Art seems like a good one. Cooking too potentially. I wonder what would bring the most new visitors to Lemmy?