• t3rmit3@beehaw.org
    10 months ago

    This is just an extension of the larger issue of people not understanding how AI works, and trusting it too much.

AI is, and has always been, about trading accuracy for speed. It excels in cases where slow, methodical work isn't being given sufficient time anyway, so accuracy is already low(er) as a result (e.g. overworked doctors examining CT scans).

    But it should never be treated as the final word on something; it’s the first ~70%.

    • Sonori@beehaw.org
      10 months ago

      Didn’t it turn out that the CT scan analysis thing was just the model figuring out the rough age of the machine, because older machines tend to be in poorer places with more cancer, and are more likely to be used only on serious illnesses?

      • ericjmorey@programming.devOP
        10 months ago

        If taking into account the older machines results in better healthcare, that seems like a great thing to be discovered as a result of the use of machine learning.

        Your summary sounds like it may be inaccurate, but it’s interesting enough that I want to know more.

        • Sonori@beehaw.org
          10 months ago

          I believe it was from a study on detecting tuberculosis, but unfortunately Google hasn’t been very helpful for me.

          The problem is that “people in poorer areas are more at risk from TB” is not a new discovery. A model intended and billed as detecting TB from a scan ideally should not be using a factor like “this hospital is old and poor” to decide whether a scan shows diseased tissue, since that intrinsically means the model is more likely to miss the disease in patients at better hospitals while over-diagnosing it in poorer ones, and at-risk people can of course still go to newer hospitals.

          A doctor will take risk factors into consideration, but also knows that just because their hospital got a new machine doesn’t mean their patients are now less likely to have a potentially fatal disease. Relying on the shortcut results in worse diagnoses, even if it technically scores better on the training set.
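          The failure mode described above is easy to reproduce on synthetic data. The sketch below is purely illustrative (the dataset, feature names, and logistic-regression setup are all made up, not from any real TB study): a model is trained on data where “old machine” is strongly correlated with the disease label, then evaluated on a population where that correlation no longer holds.

          ```python
          # Toy sketch of shortcut learning on a confounded feature.
          # All distributions and numbers here are invented for illustration.
          import math
          import random

          random.seed(0)

          def make_data(n, confounded):
              """Generate ((machine_age, tissue_signal), has_tb) samples.

              tissue_signal is the genuine but noisy indicator of disease.
              When confounded=True, sick patients are mostly scanned on old
              machines, mimicking the bias in the training hospitals; when
              False, machine age is independent of the label.
              """
              data = []
              for _ in range(n):
                  has_tb = random.random() < 0.3
                  tissue = (1.0 if has_tb else 0.0) + random.gauss(0, 0.8)
                  if confounded:
                      old = 1.0 if random.random() < (0.9 if has_tb else 0.1) else 0.0
                  else:
                      old = 1.0 if random.random() < 0.5 else 0.0
                  data.append(((old, tissue), 1 if has_tb else 0))
              return data

          def train_logreg(data, epochs=200, lr=0.1):
              """Plain-Python logistic regression trained with SGD."""
              w, b = [0.0, 0.0], 0.0
              for _ in range(epochs):
                  for (x, y) in data:
                      p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
                      err = p - y
                      w[0] -= lr * err * x[0]
                      w[1] -= lr * err * x[1]
                      b -= lr * err
              return w, b

          def accuracy(model, data):
              w, b = model
              hits = sum(
                  ((1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))) > 0.5) == (y == 1)
                  for (x, y) in data
              )
              return hits / len(data)

          model = train_logreg(make_data(2000, confounded=True))
          print("weight on machine age:  ", round(model[0][0], 2))
          print("weight on tissue signal:", round(model[0][1], 2))
          print("accuracy, biased test set:  ", accuracy(model, make_data(2000, confounded=True)))
          print("accuracy, unbiased test set:", accuracy(model, make_data(2000, confounded=False)))
          ```

          The model leans heavily on the machine-age feature, so it looks accurate on a test set drawn from the same biased hospitals but degrades sharply once machine age and disease are decoupled, which is exactly the “scores better on the training set, diagnoses worse in practice” problem.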

          • ericjmorey@programming.devOP
            10 months ago

            A Doctor will take risk factors into consideration

            Unfortunately we see that the data doesn’t support this assumption. Poor populations are not given the same attention by doctors. Black populations in particular receive worse healthcare in the US after adjusting for many factors like income and family medical history.

            • Sonori@beehaw.org
              10 months ago

              It’s unfortunately not certain that doctors will take such measures with their patients, even though most try, and indeed ethnic disparities are one of the things likely to be made worse by machine learning, given how little thought or training data is devoted to them. But the age of a hospital’s machines is not a good proxy for risk factors: the two might be statistically correlated, but the actual patient’s risk isn’t. Less at-risk people may go to a cheaper hospital, and more at-risk people might live in a city that also has a very up-to-date hospital.

    • Scrubbles@poptalk.scrubbles.tech
      10 months ago

      I feel like I’ve been screaming this for so long, and you’re someone who gets it. AI stuff right now is pretty neat. I’ll use it to get jumping-off points and new ideas on how to build something.

      I would never ever push something written by it to production without scrutinizing the hell out of it.