• 0 Posts
  • 51 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • This stuff is getting pushed all the time in Obsidian plugins (note-taking/personal knowledge management software). That kind of drives me crazy, because the whole appeal of the app is that your notes are just plain text you could easily read in Notepad, but some people are chunking their notes up into tiny, confusing, bite-sized pieces so they’re better formatted for a RAG (wow, that sounds familiar).

    Even without a RAG, using LLMs for searching is sketchy. I was digging through a lot of obscure Stack Overflow posts yesterday and kept thinking: how could an LLM possibly help with this? It takes less than a second to type in the search terms, and you only have to skim the titles and snippets of the results to tell if you’re on the right track. You have the exact same bottleneck of typing and reading, except with ChatGPT or Copilot you also have to pad your query with a bunch of filler and then read all the filler slop in the answer as it streams in a couple thousand times slower than dial-up. Maybe they’re more evenly matched on simpler questions you don’t have to interrogate, but then why even bother? I’ve seen people say ChatGPT is faster, easier, and more accurate than Stack Overflow, and even two crazy ones who said Stack Overflow is completely obsolete; trying to understand that perspective just causes me psychic damage.


  • I’m in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It’s one of those things where AI bros will go, “Look, it’s so good at poetry!!” but they have no taste and can’t even tell that it sucks: LLMs just churn out ABAB poems, and getting anything else out of them is like pulling teeth. The output from a Markov chain generator is a little more garbled and broken, but it’s a lot more interesting in my experience. Interesting content that’s a little rough around the edges always wins over smooth, featureless AI slop in my book.


    slight tangent: I was interested in seeing how they’d work for open-ended text adventures a few years ago (back around GPT-2, when AI Dungeon launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech-optimist goodthink way of framing this is “small LLMs are really good at creative writing for their size!”)

    I don’t think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.

    Orange site example:

    Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]

    I had a similar idea, interesting to see that it actually works. [ . . . ]

    Reddit:

    I think that’s cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)

    It’s the first time I’ve used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I’m very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.

    For storytelling or creative writing, I would rather have the more interesting broken-English output of a Markov chain generator, or maybe a tarot deck or a D100 table. Markov chains are also genuinely great for random name generators. I’ve actually laughed out loud at Markov chains with friends, throwing a group chat into one and seeing what comes out. I can’t imagine ever getting something like that from an LLM.
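
    (for reference, a word-level Markov chain generator really is tiny. here’s a minimal sketch in Python; the order-2 prefix and the chat_export.txt filename are just placeholder choices for the example, not anything standard)

        import random
        from collections import defaultdict

        def build_chain(text, order=2):
            # map every `order`-word prefix to the words seen right after it
            words = text.split()
            chain = defaultdict(list)
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
            return chain

        def generate(chain, length=40):
            # start from a random prefix and walk the chain, hopping to a
            # fresh random prefix whenever we hit a dead end
            order = len(next(iter(chain)))
            out = list(random.choice(list(chain)))
            while len(out) < length:
                followers = chain.get(tuple(out[-order:]))
                if not followers:
                    out.extend(random.choice(list(chain)))
                    continue
                out.append(random.choice(followers))
            return " ".join(out)

        # chat_export.txt is a made-up filename: point it at any plain-text
        # dump, e.g. an exported group chat or a newline-separated name list
        corpus = open("chat_export.txt", encoding="utf-8").read()
        print(generate(build_chain(corpus)))

    with order=2 on a small corpus you get exactly that half-coherent, half-unhinged output; crank the order up and it mostly parrots the source verbatim instead.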




  • Friends don’t let friends OSINT

    i can stop any time I want I swear

    The youtube page you found is less talked about, though a reddit comment on one of them said “anyone else thinking burntbabylon is Luigi?”. I will point out that the rest of his online presence doesn’t really paint him as “anti tech” overall, but who can say.

    apparently there was also an imposter youtube channel that I missed

    https://www.cnn.com/us/live-news/brian-thompson-unitedhealthcare-death-investigation-12-9-24#cm4hp9zyk000m3b6nobdudr15

    not sure what his official instagram is, but I saw a mention of the instagram account @nickakritas_ around the beginning of his channel (assuming it’s his). didn’t appear in the internet archive though.

    also saw these twitter & telegram links promoting his channel; the twitter one was deleted or nuked (I use telegram to talk with friends who have it, but the lack of content removal + terrible encryption means I don’t touch unknown telegram links with a 10ft pole, so I have no idea what’s in there):

    I missed a couple of videos that survived on the internet archive, but I couldn’t make it through 5 seconds of any of them. one of them (“How Humans Are Becoming Dumber”) cites that tech priest guy Gwern Branwen, and “Anti-Tech” was gone from the channel name by then. he changed the channel name a lot, so maybe he veered away from it being an anti-tech channel?

    edit: channel names were a little wrong, I put them in the parent comment


  • EDIT: this probably isn’t him, but I’ll leave it up. the real account appears to be /u/mister_cactus

    Unsure where to put this or if it’s even slightly relevant, but I’ve had some fun looking up the UH shooter guy.

    I think I’ve found both his Reddit account and YouTube channel (it’s been renamed a couple of times). Kinda just wanted to see how much I could dig up for the hell of it. Big surprise that he’s completely nuts.

    He got raked over the coals for this: https://www.reddit.com/r/collapse/comments/126vycx/why_scientists_cant_be_trusted/

    https://api.pullpush.io/reddit/search/comment/?author=burntbabylon

    edit: reasoning and more details

    here’s my chain of reasoning to get to the youtube channel:

    • his goodreads review quotes a reddit comment
    • the reddit comment is in a small thread where the OP deleted their account
    • since the thread is small, the OP would have seen most of its comments, so whoever is quoting one of them is plausibly the OP
    • the wayback machine (captured before the account deletion) shows the thread’s author as burntbabylon
    • that user linked to and defended a video from a very small youtube channel and everyone else on /r/collapse thought it was crazy
    • some of the ted kaczynski analysis videos came out right before his goodreads review

    his early channel had some thumbnails made for him by ‘bastizopilled’, an ironic/unironic “bastizo futurist” who does interviews in a black mask with a gun on him. bastizopilled leads right into a bunch of other groypers and the guy in the screenshot I posted below. kind of wonder if that ‘black mask with a gun’ aesthetic influenced the clothes he brought to the shooting.

    the channel names he used in 2023:

    • @NickAkritas, Nick Akritas (January)
    • @NicksEssays, Nick’s Essays (January ~20th)
    • @AntiTechCabin, Anti-Tech Cabin (early March)
    • @Cabin_Club, AntiTechCabin
    • @Cabin_Club, Cabin Club (March ~18th)
    • @CabinProductions_, Cabin Productions (June)
    • @Laconian_, Laconian (September)
    • @NicholasLaconian, Laconian (November)

    here’s a big pile of crazy tags he wrote on one of those videos (were people still writing tags in their video descriptions in 2023?):

    unabomber, kaczynski, ted kaczynski, unabomber cabin, kasinski, kazinski, industrial society and and its future, unabomber manifesto, the industrial revolution and its consequences, transhumanism, futurism, anprim, anarchoprimitivism, anarchism, leftism, liberalism, chad haag, nick akritas, gerbert johnson, hamza, anti tech collective, what did ted kaczynski believe, john doyle, hasanabi, self improvement, politics, jreg, philosophy, funny tiktok, kaczynski edit, ted kaczynski edit, zoomer, doomer, A.I. art, artifical intelligence, elon musk, AI art, return to tradition, embrace masculinity, reject modernity, reject modernity embrace masculinity, reject modernity embrace tradition, jReg, Greg Guevara, sam hyde, oversocialized, oversocialization, blackpilled, modernity, the industrial revolution, self improvement


    edit again: holy shit these people all suck. assuming the youtube channel is the shooter, he’s a friend-of-a-friend of this guy:

    and if that’s true, he’d be a friend-of-a-friend-of-a-friend of nick fuentes






  • RationalWiki really hits that sweet spot where everybody hates it, and you know that means it’s doing something right:

    From Prolewiki:

    RationalWiki is an online encyclopedia created in 2007. Although it was created to debunk Conservapedia and Christian fundamentalism,[1] it is also very liberal and promotes anti-communist propaganda. It spreads imperialist lies and about socialist states including the USSR[2] and Korea[3] while uncritically promoting narratives from the CIA and U.S. State Department.

    From Conservapedia:

    RationalWiki.org is largely a pro-SJW atheists website.

    [ . . . ]

    RationalWikians have become very angry and have displayed such behavior as using profanity and angrily typing in all cap letters when their ideas are questioned by others and/or concern trolls (see: Atheism and intolerance and Atheism and anger and Atheism and dogmatism and Atheism and profanity).[33]

    From WikiSpooks (with RationalWiki’s invitation for anyone to collaborate highlighted with an emotionally vulnerable red box for emphasis):

    Although inviting readers to “register and engage in constructive dialogue”, RationalWiki appears not to welcome essays critical of RationalWiki[3] or of certain official narratives. For example, it is dismissive of the Journal of 9/11 Studies, terming it, as of 2017, a “peer- crank-reviewed, online, open source pseudojournal”.[4]

    And a little bonus:

    “Can I have Google discount my rationalwiki entry, has errors posted out of spite 10 years ago”

    https://support.google.com/websearch/thread/106033064/can-i-have-google-discount-my-rationalwiki-entry-has-errors-posted-out-of-spite-10-years-ago?hl=en

    My site questions Darwinism but that’s become quite mainstream. But my rationalwiki page has over 20 references to me being a creationist, and is tagged “pseudoscience.” Untrue










  • I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

    I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.

    It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
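
    (for what it’s worth, the actual result in that paper is an impossibility argument; very loosely paraphrased, and glossing over their exact formal setup, it’s something like:)

        % loose paraphrase of the paper's framing, not its exact statement:
        % model the LLM as a computable function h and the ground truth as a
        % function f over input strings s; a hallucination is any disagreement
        \[
            \mathrm{Hall}(h, f) = \{\, s \mid h(s) \neq f(s) \,\}
        \]
        % claim, roughly: for any computable LLM h there is a ground truth f
        % such that Hall(h, f) is infinite, i.e. some hallucination can never
        % be trained away

    none of which is the kind of thing you can settle by getting o1 to say “yes” a few times.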

    Then he immediately follows up with:

    Then I started to discuss with o1. [ . . . ] It says yes.

    Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

    Then I asked o1 [ . . . ], to which it says yes too.

    I’m not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.