Offline version of Chat GPT (image · lemmy.ml)
lidd1ejimmy@lemmy.ml to Memes@lemmy.ml · English · 4 months ago · 9 comments
neidu2@feddit.nl · 24 points · edited · 4 months ago
Technically possible with a small enough model to work from. It's going to be pretty shit, but "working". Now, if we were to go further down in scale, I'm curious how/if a 700 MB CD version would work. Or how many 1.44 MB floppies you would need for the actual program and the smallest viable model.
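As a back-of-the-envelope check on the floppy question, here is a minimal Python sketch. The disk capacity is the real figure for a "1.44 MB" floppy (1440 KiB = 1,474,560 bytes), but the program and model sizes are assumptions for illustration (a ~5 MB llama.cpp-style binary and a ~670 MB 4-bit ~1B-parameter model), not numbers from the thread:

```python
# Back-of-the-envelope: how many 1.44 MB floppies for a tiny local LLM?
# The model/program sizes below are assumptions, not measured values.
FLOPPY_BYTES = 1_474_560           # a "1.44 MB" floppy actually holds 1440 KiB

def floppies_needed(total_bytes: int) -> int:
    # Ceiling division: a partially filled last disk still counts.
    return -(-total_bytes // FLOPPY_BYTES)

program_bytes = 5 * 1024**2        # assumed ~5 MB inference binary (llama.cpp-style)
model_bytes = 670 * 1024**2        # assumed ~670 MB 4-bit ~1B-parameter model

print(floppies_needed(program_bytes + model_bytes))  # -> 480 floppies
```

At those assumed sizes it comes out to 480 disks, so the floppy edition is less a software problem than a shelving problem.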
NoiseColor@lemmy.world · 5 points · 4 months ago
Might be a DVD. A 70B Ollama LLM is like 1.5 GB, so you could fit many models on one DVD.
Ignotum@lemmy.world · 4 points · 4 months ago
A 70B model taking 1.5 GB? So 0.02 bytes per parameter? Are you sure you're not thinking of a heavily quantised and compressed 7B model or something? Ollama's llama3 70b is 40 GB from what I can find; that's a lot of DVDs.
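For anyone checking the arithmetic behind this correction, a quick sketch using Python as a calculator, with 1.5 GB taken from the earlier comment and ~40 GB as the llama3 70b size cited here:

```python
# Sanity check on the sizes in this exchange.
params = 70e9                 # 70B parameters

claimed = 1.5e9               # the 1.5 GB figure from the earlier comment
print(claimed / params * 8)   # ~0.17 bits/parameter: implausibly small

actual = 40e9                 # ~40 GB, the size cited for llama3 70b
print(actual / params * 8)    # ~4.6 bits/parameter
```

Roughly 4.6 bits per parameter is what you'd expect from a 4-bit quantized release once per-block scaling overhead is included, so the 40 GB figure is the plausible one.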
9point6@lemmy.world · 4 points · 4 months ago
Less than half of a BDXL though! The dream still breathes.
NoiseColor@lemmy.world · 4 points · 4 months ago
Ah yes, probably the smaller version, you're right. Still a very good LLM, better than GPT-3.
errer@lemmy.world · 1 point · 4 months ago
It is a DVD; you can faintly see "DVD+R" on the left side.
Num10ck@lemmy.world · 3 points · 4 months ago
ELIZA was pretty impressive for the 1960s, as a psychotherapy chatbot.
lidd1ejimmy@lemmy.ml (OP) · 1 point · 4 months ago
Yes, I guess it would be a funny experiment for just a local model.