• 0 Posts
  • 41 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • In stasis, it bends but won’t break us
    Aurelia, majestic as a lion’s roar embodies her
    Terranean, threaten and shield us
    Endanger, recoil and rewind the tapes
    She’s born again
    […]
    Medusa, a haunting allure
    Elusive ectoplasmic residue
    She’ll dance for you
    Inducing hydro-cosmic dreams in you
    Of eternal blue

  • Tried the same prompt:

    I have a riddle for you. With a boat, how can a man and a goat get across the river?

    This riddle is reminiscent of classic river-crossing puzzles. The exact solution depends on the additional conditions or constraints that might be implied (like if there are other characters involved, such as a wolf or cabbage). Assuming it’s just the man and the goat, here’s a straightforward solution:

    1. The man takes the goat across the river.
    2. The man leaves the goat on the other side.
    3. The man returns alone to the original side to get the boat. So, in short, the man and the goat cross together in the boat, and then the man returns to the original side to get the boat. If there are additional details or constraints to consider, please let me know!

    The goal is to get both to the other side, not just the goat

    I see! If the goal is to get both the man and the goat to the other side of the river, here’s how it can be done:

    1. The man and the goat both get into the boat.
    2. The man rows the boat with the goat across the river.
    3. They both disembark on the other side of the river. In this way, both the man and the goat successfully reach the other side of the river together.

    Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to come to the answer.

    And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition, and are worthless, since they can’t form new memories.


  • Prompt:

    Alice and Bob are identical twins, Bob is twice the age of Alice. How old are they? Show your thought process step by step.

    Answer:

    Given the problem statement that Alice and Bob are identical twins and Bob is twice the age of Alice, we need to determine their ages. Let’s analyze the information step by step.

    1. Identical Twins: Identical twins are born at the same time, so they are always the same age.
    2. Statement Analysis:
    • Bob is twice the age of Alice.
    • Since they are identical twins, Bob and Alice must have the same age.
    3. Logical Conclusion:
    • The only scenario where a person’s age can be twice their own age is if their age is zero.

    Thus, the only solution that satisfies both the condition of them being identical twins and the age relationship given in the problem is: Alice’s age = 0, Bob’s age = 0. Therefore, Alice and Bob are both 0 years old.
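    The zero-age conclusion follows from the constraints alone; a quick brute-force sketch checking it (the upper age bound is arbitrary):

    ```python
    # Two constraints: identical twins means bob == alice,
    # and the puzzle adds bob == 2 * alice.
    # Substituting gives age == 2 * age; brute force over candidate ages
    # confirms only age 0 satisfies it.
    solutions = [age for age in range(0, 121) if age == 2 * age]
    print(solutions)
    ```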

    Are there biases due to the training data? Yes
    Does that mean it is totally incapable of reason? No, why would it?

    And the models aren’t trying to act like a specific person, but like humans in general, so variation in writing styles in the data is quite irrelevant. As we’ve already seen, a model will usually adopt the writing style of the prompt, much like a writer will usually stick to their writing style throughout a book.

    Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary confinement for long enough.


    What if I say big fusion companies will take over the AI market, since they have the energy to train better models? That seems exactly as likely.

    Remember when GPUs stopped being available because OpenAI bought Nvidia and AMD and took all the GPUs for themselves?
    No? Weird, since GPUs are needed for them to be able to deliver anything significantly better than what we have now 🤔


    Yeah, that was a hypothetical: if you had those things, you would be able to create a true AGI (or what I would consider a true AGI, at least).

    Text is basically just a proxy, but to become proficient at predicting text you do need to develop many of the cognitive abilities that we associate with intelligence. It’s also the only type of data we have literal terabytes of lying around, so it’s the best we’ve got 🤷‍♂️

    Regarding memory, the human mind can be viewed as taking in stimuli, associating them with existing memories, condensing that into some high-level representation, and then storing it. An LLM with a long enough context window could look back at past input and output and use that information to influence its current output, to mostly the same effect.
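    A toy sketch of that idea, with the transcript itself standing in for memory (the “model” here is just a stub I made up to show the mechanism):

    ```python
    # Hypothetical sketch: a long context window as a stand-in for memory.
    # Every turn is appended to the transcript, and the stub model always
    # sees the full transcript, so earlier turns can shape later replies.
    history = []

    def stub_model(transcript):
        # A real LLM would condition on the whole transcript; the stub
        # just reports how much of it it can see.
        return f"(seen {len(transcript)} turns)"

    def respond(user_input):
        history.append(("user", user_input))
        reply = stub_model(history)
        history.append(("assistant", reply))
        return reply

    respond("my name is Ada")
    print(respond("what did I tell you?"))
    ```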

    What do you mean by “throwing a black box into an analytical prediction machine”? And what do you mean by “5 years of development”?


    Not sure I’m fully understanding your point. Are you saying that the large AI companies will create AIs that produce technologies beyond what everyone else is capable of, thus outcompeting everyone, effectively monopolizing every market, and from there basically becoming the Umbrella Corporation?

    I would be very impressed if anyone managed to make an AI capable of innovation to that degree, but sure, in that case we would have to fall back on something like government oversight and regulations to keep the companies in check, I suppose.