• shneancy@lemmy.world · ↑13 ↓2 · 3 months ago

      mmm, unironically sounds like me. According to my IQ test I had PhD-level intelligence at 18, and what am I doing at 24? Unemployed, playing video games, and crying.

      • Noodle07@lemmy.world · ↑14 ↓1 · 3 months ago

        It’s alright, you can keep going for a bit; I’m about to hit 30 playing video games and crying.

        • shneancy@lemmy.world · ↑1 · 3 months ago

          nah, IQ tests are more or less bullshit; they’re incredibly flawed and biased. The only situation where you’d see me talking about them at length is when bashing the entire concept of trying to quantify general intelligence.

    • Wooki@lemmy.world · ↑4 · 3 months ago (edited)

      Wisdom implies it has intelligence, which it does not. It’s a word predictor.
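
      (For the curious, “word predictor” is meant literally: a loop that scores the next token and appends it, nothing more. A minimal sketch using the Hugging Face transformers library, with GPT-2 as a stand-in; illustrative only, not whatever OpenAI actually runs.)

      ```python
      # "Word prediction" in its rawest form: score every candidate next token,
      # append the most likely one, repeat. GPT-2 stands in for any LLM here.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      ids = tokenizer("PhD-level intelligence is", return_tensors="pt").input_ids
      with torch.no_grad():
          for _ in range(10):                   # extend by ten tokens, greedily
              logits = model(ids).logits        # one score per vocabulary token
              next_id = logits[0, -1].argmax()  # most likely next token; no "wisdom" involved
              ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
      print(tokenizer.decode(ids[0]))
      ```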

  • kn0wmad1c@programming.dev · ↑37 · 3 months ago

    Translation: GPT-5 will (most likely illegally) be fed academic papers that are currently behind a paywall

  • jas0n@lemmy.world · ↑25 · 3 months ago

    All aboard the hype train! We need to stop using the term “AI” for advanced autocomplete. There is not even a shred of intelligence in this. I know many of the people here already know this, but how do we get this message to journalists?! The amount of hype being repeated by respectable journalists is sickening.

  • HexesofVexes@lemmy.world · ↑23 · 3 months ago

    Wow… they want to give AI even more mental illness and crippling impostor syndrome to make it an expert in one niche field?

    Sounds like primary school dropout level thinking to me.

    • SturgiesYrFase@lemmy.ml · ↑3 · 3 months ago

      I know three people who dropped out of primary and did quite well. They all ended up taking remedial studies later in life. Two were in trades and the other was a postie. All three were smart as fuck. Just because life gets in the way of going to school doesn’t mean a person is dumb, just uneducated.

    • LeFantome@programming.dev · ↑2 ↓1 · 3 months ago

      If she wants it to. At some point, all the chatbots are going to be given bodies. We all know it.

  • Matriks404@lemmy.world · ↑17 · 3 months ago

    What a bunch of bullshit. I recently asked ChatGPT to do a morphological analysis of some very simple sentences in a Native American language, and it gave absolute nonsense as an answer.

    And let’s be clear: it was an elementary linguistics task, something I learned to do on my own just by taking a free online course.

  • RememberTheApollo_@lemmy.world · ↑14 · 3 months ago (edited)

    So copying everyone else’s work and rehashing it as your own is what makes PhD-level intelligence? (Sarcastic comments about post-grad work forthcoming, I’m sure.)

    Unless AI is able to come up with original, testable, verifiable, repeatable, previously unknown associations, facts, theories, etc. of sufficient complexity, it’s not PhD level… and using big words doesn’t count either.

    • lad@programming.dev · ↑2 · 3 months ago

      I think they had some specific metric in mind when they said this. But on the other hand, this is kind of a “you are here” situation: AI can’t do that now, but there’s no telling they can’t make it do that later. It would probably be a much more useful AI at that point, too.

  • iAvicenna@lemmy.world · ↑10 · 3 months ago

    I like how they have no road map for achieving artificial general intelligence (apart from “let’s train LLMs with a gazillion parameters and the equivalent of the yearly energy consumed by ten large countries”), yet they pretend ChatGPT-4 is only two steps away from it.

    • Ignotum@lemmy.world · ↑4 · 3 months ago

      Hard to make a roadmap when people can’t even agree on what the destination is, let alone how to get there.

      But if you have enough data on how humans react to stimuli, and you have a good enough model, then you will be able to train it to behave exactly like a human. The approach is sound, even though in practice there probably doesn’t exist enough usable training data in the world to reach true AGI. Still, the models are already good enough to be used for certain tasks.

      • LANIK2000@lemmy.world · ↑2 · 3 months ago (edited)

        Thing is, we’re not feeding it how humans react to stimuli; for that you’d need it hooked up to a brain directly. What it gets is too filtered and biased by being text only; this approach naively ignores things like memory and assumes text messages exist in a vacuum. Throwing a black box into an analytical prediction machine only works as long as you’re certain it will generally produce the same output for the same input, not if your black box can suddenly experience five years of development and emerge a different entity. It skips too many steps to become intelligent; I mean, it literally skips the entire process between reading and writing.

        • Ignotum@lemmy.world · ↑1 · 3 months ago

          Yeah, that was a hypothetical: if you had those things, you would be able to create a true AGI (or what I would consider a true AGI, at least).

          Text is basically just a proxy, but to become proficient at predicting text you do need to develop many of the cognitive abilities that we associate with intelligence, and it’s also the only type of data we have literal terabytes of lying around, so it’s the best we’ve got 🤷‍♂️

          Regarding memory: the human mind can be viewed as taking in stimuli, associating them with existing memories, condensing that into some high-level representation, and then storing it. An LLM could, with a long enough context window, look back at past input and output and use that information to influence its current output, to mostly the same effect.
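
          (A hedged sketch of that idea: “memory” as nothing more than re-feeding the whole transcript through the context window. GPT-2 via the transformers pipeline stands in for any LLM; the names here are illustrative, not any product’s real API.)

          ```python
          # Hedged sketch: an LLM "remembers" only what is re-fed into its context.
          from transformers import pipeline

          generate = pipeline("text-generation", model="gpt2")
          history = []

          def reply(user_msg):
              history.append(f"User: {user_msg}")
              prompt = "\n".join(history) + "\nAssistant:"
              full = generate(prompt, max_new_tokens=30)[0]["generated_text"]
              answer = full[len(prompt):]        # keep only the newly generated tail
              history.append(f"Assistant: {answer}")
              return answer                      # delete a line from `history` and that "memory" is gone
          ```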

          What do you mean throwing a black box into an analytical prediction machine? And what do you mean 5 years of development?

          • LANIK2000@lemmy.world · ↑1 · 3 months ago

            The black box is the human who reads and outputs text, and the analytical prediction machine is the AI. Five years of development is the human living their life before returning to continue writing. It’s an extreme example, but I’m just trying to point out that the context of what a person writes can change drastically between individual messages, because anything can happen in between. The data is thus fundamentally flawed for training intelligence, since that step, the thought process, is missing entirely.

            As to why I called the AI an analytical prediction machine: that’s essentially what it does. It has analyzed an unholy amount of random text from the internet, meaning conversations, blogs, books and so on, to predict what could follow the text you gave it. It’s why prompt injection is so hard to combat, and why, if you give it a popular riddle changed slightly, like “with a boat, how can a man and a goat get across the river”, it fails spectacularly, trying to shove in the original answer somehow. I’d say that’s proof it didn’t learn to understand (cognition), because it can’t use logic to reason about a deviation from the dataset.

            As for memory, we can kind of simulate it with text, but it’s not perfect. If the AI doesn’t write something down, it didn’t happen, and so any thoughts, feelings or mental analysis stop existing with each new generation. The only way it could possibly develop intelligence is if we made it needlessly ramble and describe everything, like a very bad book.

            And so, to return to the beginning of your comment: I don’t believe it’s necessary to possess any cognitive abilities to generate text, and in turn I don’t see it as evidence that we’re getting any closer to AGI.

            • Ignotum@lemmy.world · ↑1 · 3 months ago

              Prompt:

              Alice and Bob are identical twins, Bob is twice the age of Alice. How old are they? Show your thought process step by step.

              Answer:

              Given the problem statement that Alice and Bob are identical twins and Bob is twice the age of Alice, we need to determine their ages. Let’s analyze the information step by step.

              1. Identical Twins: Identical twins are born at the same time, so they are always the same age.
              2. Statement Analysis:
                 • Bob is twice the age of Alice.
                 • Since they are identical twins, Bob and Alice must have the same age.
              3. Logical Conclusion:
                 • The only scenario where a person’s age can be twice their own age is if their age is zero. Thus, the only solution that satisfies both the condition of them being identical twins and the age relationship given in the problem is: Alice’s age = 0, Bob’s age = 0. Therefore, Alice and Bob are both 0 years old.
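
              (The quoted constraints really do pin this down; a quick sanity check with sympy, just as a sketch:)

              ```python
              # "Same age" plus "twice the age" only admits the zero solution.
              from sympy import Eq, solve, symbols

              alice, bob = symbols("alice bob")
              print(solve([Eq(alice, bob), Eq(bob, 2 * alice)], [alice, bob]))
              # -> {alice: 0, bob: 0}
              ```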

              Are there biases due to the training data? Yes.
              Does that mean it is totally incapable of reason? No, why would it?

              And the models aren’t trying to act like a specific person, but like humans in general, so variation in writing styles across the data is quite irrelevant. As we’ve already seen, a model will usually adopt the writing style of the prompt, much like a writer will usually stick to their style throughout a book.

              Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary for long enough.
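
              (That “rambling” setup is easy to reproduce, for anyone curious: feed a model its own output with no new input. A sketch with GPT-2 as a stand-in:)

              ```python
              # A model fed only its own output has no grounding input to correct drift.
              from transformers import pipeline

              generate = pipeline("text-generation", model="gpt2")

              text = "I am thinking about what to think about."
              for _ in range(5):
                  text = generate(text, max_new_tokens=40)[0]["generated_text"]
              print(text)  # coherence typically degrades round over round
              ```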

              • LANIK2000@lemmy.world · ↑1 · 3 months ago (edited)

                Let’s do the riddle I suggested, because we need something popular in the dataset, but present it with a deviation that makes it stupidly simple yet is unlikely to exist.

                Prompt:

                I have a riddle for you. With a boat, how can a man and a goat get across the river?

                Answer:

                Ah, that’s a classic riddle! The answer is: the man takes the goat across the river first and leaves it on the other side. Then he goes back, picks up the boat, and brings it across to the side where he left the goat. Finally, he leaves the goat there and takes himself across. That way, both the man and the goat safely cross the river without any issues. Do you have any other riddles or questions you’d like to discuss today?

                A normal human wouldn’t be fooled by this; they’d say the two can simply row across, and maybe ask where the riddle actually is. They’d likely be confused or expect more. The AI isn’t, because it completely lacks the ability to reason. At least it ends up solved; that’s probably the best response I got while trying to make this point. Let’s continue.

                Follow up prompt:

                Why would he need to take the boat? Wouldn’t he need to have used the boat to get the goat across? And once the boat is there, why would he return to take him self?

                Answer:

                Ah, I see where the confusion lies! In this classic riddle scenario, the man and the goat can actually swim across the river without using the boat. The key is to ensure that either the man or the goat is always present on each side of the river while they are crossing together. Thank you for pointing out that oversight in my previous response! If you have any more riddles or questions to discuss, feel free to share them. How else can I be of assistance today?

                Final prompt:

                Why is that the key? What difference does it make? How can they be both present on both sides while crossing together?

                Final answer:

                I apologize for the confusion in my previous responses. Allow me to clarify the solution to this riddle: The key factor in this scenario is ensuring that either the man or the goat remains on each side of the river at all times to prevent any issues that may arise from leaving them alone together. Here’s how they can successfully cross together without using a boat: 1. The man takes the goat across the river first and leaves it on the other side. 2. The man returns alone and brings himself across to join the goat on that side. 3. The man then picks up the goat and carries it back across to the original side of the river. 4. Finally, he goes back one last time to bring himself across. By following this sequence, both can be present on each side while crossing together, ensuring that there are no conflicts or risks during their journey. Thank you for your patience, and I appreciate your attention to detail in understanding this riddle! If you have any more questions or topics you’d like to discuss, feel free to share them with me. How else can I assist you today?

                I think that’s quite enough. It’s starting to ramble like you said it would (tho much earlier than expected), and unlike the first attempt, it doesn’t even end up solved anymore xD I’d argue this is a scenario that should be absolutely trivial, and yet the AI keeps asserting information I never presented and keeps failing to apply logic correctly. The only time it knows how to reason is when someone in its dataset already spelled out the reasoning for a given question. If the logic doesn’t exist in the dataset, it has great difficulty making heads or tails of it.

                And yes, I’d argue memories are indeed absolutely vital to intelligence. If we want cognition, aka the process of acquiring knowledge and understanding, we need it to remember. And if it immediately loses that information, or the information erodes too quickly, it’s essentially worthless.

                • Ignotum@lemmy.world · ↑1 · 3 months ago

                  Tried the same prompt:

                  I have a riddle for you. With a boat, how can a man and a goat get across the river?

                  This riddle is reminiscent of classic river-crossing puzzles. The exact solution depends on the additional conditions or constraints that might be implied (like if there are other characters involved, such as a wolf or cabbage). Assuming it’s just the man and the goat, here’s a straightforward solution:

                  1. The man takes the goat across the river.
                  2. The man leaves the goat on the other side.
                  3. The man returns alone to the original side to get the boat. So, in short, the man and the goat cross together in the boat, and then the man returns to the original side to get the boat. If there are additional details or constraints to consider, please let me know!

                  The goal is to get both to the other side, not just the goat

                  I see! If the goal is to get both the man and the goat to the other side of the river, here’s how it can be done:

                  1. The man and the goat both get into the boat.
                  2. The man rows the boat with the goat across the river.
                  3. They both disembark on the other side of the river. In this way, both the man and the goat successfully reach the other side of the river together.

                  Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test its logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to come to the answer.

                  And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition and are worthless, since they can’t form new memories.

      • iAvicenna@lemmy.world · ↑2 ↓1 · 3 months ago

        The approach is not sound when all the other factors are considered. If AI continues along this path, big AI companies will likely need to usurp the next big tech breakthroughs, like quantum computing and fusion energy, to keep growing and producing more profit, instead of those technologies being used for better purposes (cheaper and cleaner household energy, scientific advances, etc.). All things considered, excelling at image analysis, creative writing and digital art won’t be worth all the damage this is going to cause.

        • Ignotum@lemmy.world · ↑2 · 3 months ago

          Usurp? They won’t be the ones developing quantum computers, nor fusion; if those technologies become available they might start using them, but that won’t somehow make them unavailable for other uses.

          And seeing as they make money from “renting out” the models, they can easily be “used for better purposes”.

          ChatGPT is currently free for anyone to use; this isn’t some technology they’re hoarding and keeping for themselves.

          • iAvicenna@lemmy.world · ↑1 · 3 months ago (edited)

            By usurp I mean fill all the available capacity for their own use (along with the other tech giants running the same moon race), assuming that by then they will be the largest tech giants of the era and have the financial means to do so.

            Don’t get me wrong, the things ChatGPT can do are amazing. Even if it hallucinates and can’t really reason logically, it is still beyond what I would have expected. But when the time I mentioned above comes, people won’t be given a choice between AI and cheaper energy or better health care. All those technological advancements will be bought up to full capacity by AI companies, and AI will be shoved down people’s throats.

            And yes, ChatGPT is free, but that is only a business decision, not a “for the good of humanity” act. Free ChatGPT helps with testing and generates popularity, which in turn brings investment. I’m not saying anything negative (or positive) about their business plan, but don’t think for a second that they will have any ethical concerns about leeching off upcoming technological innovations for the sake of generating profit. And this is just one company; there will be others too, Amazon, Google, Microsoft, etc. They will all aggressively try to own as much of these technologies as possible, leaving only scraps for other uses (and therefore making them very expensive to utilise).

            • Ignotum@lemmy.world · ↑1 · 3 months ago

              Not sure I’m fully understanding your point. Are you saying that the large AI companies will create AIs that will create technologies beyond what everyone else is capable of, thus outcompeting everyone, effectively monopolizing every market and from there basically becoming the Umbrella Corporation?

              I would be very impressed if anyone managed to make an AI capable of innovation to that degree, but sure, in that case we would have to fall back on something like government oversight and regulation to keep the companies in check, I suppose.

              • iAvicenna@lemmy.world · ↑1 · 3 months ago

                No, other people will develop technologies like quantum computing and fusion energy. Big AI companies will try to own as much of these as possible (by buying them out), because the current model of AI they’re using requires such technologies to deliver anything significantly better than what they have now. So those tech advancements will basically be owned by AI companies, leaving very little room for other uses.

                For these AI companies, going for general AI is risky; as you said above, it’s not even well defined. On the other hand, scaling up their models massively is a well-defined goal, one that however requires major compute and energy innovations like those mentioned above. If those ever happen within, say, the next ten years, the big tech companies involved in AI will jump on them and buy as much as possible for themselves, and the rest will mostly be bought by governments for military and security applications, leaving very little for other public-benefit uses.

                • Ignotum@lemmy.world · ↑1 · 3 months ago

                  What if I say big fusion companies will take over the AI market, since they’ll have the energy to train better models? Seems exactly as likely.

                  Remember when GPUs stopped being available because OpenAI bought Nvidia and AMD and took all the GPUs for themselves?
                  No? Weird, since GPUs are needed for them to deliver anything significantly better than what we have now 🤔

  • clearedtoland@lemmy.world · ↑7 · 3 months ago

    Is it weird that I still want to go for my PhD despite all the feedback about the process? I don’t think I’ve ever met a PhD or candidate who’s enthusiastically said “do it!”

    • bluemellophone@lemmy.world · ↑10 · 3 months ago

      It’s a lot of fucking work. If you enjoy hard work, learning about the latest advancements in your field, and can handle disappointment / criticism well, then it’s something to look into.

    • Contramuffin@lemmy.world · ↑9 · 3 months ago (edited)

      No, not weird at all. PhDs are pain, but certain people like the pain. If you’re good at handling stress, and also OK with working in a fast-paced, high-impact environment (for real, not business-talk BS), then it may be the right decision for you. The biggest thing I would say is that you should really, really think about whether this is what you want, since once you start a PhD, you’ve locked the next six years of your life into it with no chance of getting out.

      Edit: Also, you need to have a highly sensitive red-flag radar. As a graduate student, you are highly susceptible to abuse from your professor, and there is no recourse for it. The only way to avoid abuse is by not picking an abusive professor from the get-go, which is hard, since professors obviously would never talk badly about themselves. Train that red-flag radar; you’ll need to really read between every word and line to figure out whether a professor is right for you.

    • VeganPizza69 Ⓥ@lemmy.world · ↑4 · 3 months ago

      It’s like being drafted into a war where you only receive vague orders, and you slowly realize what the phrase “war is a racket” means. You suffer and learn things you didn’t plan on learning.

  • kemsat@lemmy.world · ↑7 ↓2 · 3 months ago

    Having a PhD doesn’t say you’re intelligent. It says you’re determined & hardworking.

    • cabron_offsets@lemmy.world · ↑5 · 3 months ago (edited)

      Eh. Maybe. But don’t discount those PhDs who were pushed through the process because their advisors were just exhausted by them. I have known too many 10th-year students. They weren’t determined or hardworking. They simply couldn’t face up to their shit decisions, bad luck, or intellectual limits.

        • lad@programming.dev · ↑1 · 3 months ago

          Not everywhere, I guess. The advisor needs to show good numbers to the stakeholder board, after all.

            • lad@programming.dev · ↑2 · 3 months ago

              I don’t know for sure, but my advisor used to worry about the number of successful PhD graduates because of the department quota, IIRC.

              On the other hand,

              his quota was not too high, and he allowed me to enroll and was okay with me, although I said in advance that I would likely become a dropout.

  • Zloubida@lemmy.world · ↑5 ↓1 · 3 months ago

    The fact that I have a PhD even though I knew, soon after I began, that I wouldn’t use it, thus losing years of my life, is proof that I’m dumb as a rock. Fitting for ChatGPT.

  • Etterra@lemmy.world · ↑2 · 3 months ago

    If AI were that capable, then using human workers would eventually become cost-prohibitive. If we’re still stuck having to work to live under a capitalist system by then, there are gonna be serious problems. A post-labor economy doesn’t need to charge for even a modestly comfortable standard of living, and the overwhelming majority of people will go looking for things to do no matter how many politicians swear otherwise.