• 8 Posts
  • 86 Comments
Joined 2 years ago
Cake day: July 13th, 2023


  • I’d say it’s a combo of them feeling entitled to plagiarise people’s work and fundamentally not respecting the work of others (a point OpenAI’s Studio Ghibli abomination machine demonstrated at humanity’s expense).

    It’s fucking disgusting how they denigrate the very work on which they built their fucking business. I think it’s a mixture of the two though: they want it plagiarized so that it looks like their bot is doing more of the coding than it is actually capable of.

    On a wider front, I expect this AI bubble’s gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major part of the current FOSS ecosystem, and it has been kneecapped by the automated plagiarism machines; programmers are likely gonna be much stingier with sharing their work because of it.

    Oh absolutely. My current project is sitting in a private git repo, hosted on a VPS. And no fucking way will I share it under anything less than GPL3.

    We need a license with specific AI verbiage. Forbidding training outright won’t work (they just claim fair use).

    I was thinking of adding a requirement that the license header must not be removed unless a specific string (“This code was adapted from libsomeshit_6.23”) is included in the comments by the tool, for the purpose of propagating security fixes and supporting a consulting market for the authors. In the US they do own the judges, but in the rest of the world the minuscule alleged benefit of not attributing would be weighed against harm to their customers (security fixes not propagated) and harm to the authors (missed consulting gigs).

    edit: perhaps even an explainer that authors see non-attribution as fundamentally fraudulent against the user of the coding tool: the authors of libsomeshit routinely publish security fixes, and the user of the coding tool, who has been defrauded into believing the code was created de novo by the coding tool, is likely to suffer harm when hackers turn those published fixes into exploits against his unpatched copy (which wouldn’t be possible if the code had in fact been created de novo).
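    Something like this is the kind of header I mean - the library name and version string are made up, it’s just to show what the tool would have to leave in place:

    ```c
    /*
     * Adapted from libsomeshit 6.23 by a code generation tool.
     * This notice must be preserved so that published security fixes for
     * libsomeshit can be traced to this copy, and so the original authors
     * remain reachable for consulting.
     */
    ```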


  • I think provenance has value outside copyright… here’s a hypothetical scenario:

    libsomeshit is licensed under MIT-0. It does not even need attribution. Version 3.0 has introduced a security exploit. It has been fixed in version 6.23 and widely reported.

    A plagiaristic LLM with a training cutoff before 6.23 can just shit out the exploit in question, even though it has already been fixed.

    A less plagiaristic LLM could RAG in the current version of libsomeshit and perhaps avoid introducing the exploit and update the BOM with a reference to “libsomeshit 6.23” so that when version 6.934 fixes some other big bad exploit an automated tool could raise an alarm.
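    A sketch of what that alarm could look like - everything here (the BOM layout, the advisory, the version numbers) is invented, it only illustrates that a recorded “copied from libsomeshit 6.23” string is enough for tooling to flag stale copies:

    ```c
    /* Toy provenance check. The BOM format, advisory data and version numbers
     * below are all hypothetical; the point is that a recorded origin string
     * lets an automated tool notice that copied code predates a security fix. */
    #include <stdio.h>
    #include <string.h>

    struct bom_entry { const char *lib; int major, minor; };        /* e.g. "libsomeshit" 6.23 */
    struct advisory  { const char *lib; int fix_major, fix_minor; };/* exploit fixed in 6.934   */

    static int predates_fix(const struct bom_entry *b, const struct advisory *a) {
        if (b->major != a->fix_major) return b->major < a->fix_major;
        return b->minor < a->fix_minor;
    }

    int main(void) {
        struct bom_entry bom[] = { { "libsomeshit", 6, 23 } };
        struct advisory adv = { "libsomeshit", 6, 934 };

        for (size_t i = 0; i < sizeof bom / sizeof bom[0]; ++i)
            if (strcmp(bom[i].lib, adv.lib) == 0 && predates_fix(&bom[i], &adv))
                printf("ALARM: code copied from %s %d.%d predates the %d.%d security fix\n",
                       bom[i].lib, bom[i].major, bom[i].minor, adv.fix_major, adv.fix_minor);
        return 0;
    }
    ```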

    Better yet, it could actually add a proper dependency instead of cut and pasting things.

    And it would not need to store libsomeshit inside its weights (which is extremely expensive) at the same fidelity. It just needs to be able to shit out a vector database’s key.

    I think the market right now is far too distorted by idiots with money trying to build the robot god. Code plagiarism is an integral part of it, because it makes the LLM appear closer to singularity (it can write code for itself! it is gonna recursively self-improve!).


  • In the case of code, what I find most infuriating is that they didn’t even need to plagiarize. Much open source code is licensed permissively enough, requiring only attribution.

    Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge (“it just learned from all the implementations”, blah blah blah) to make their tool look more impressive.

    I don’t need that; in fact it would be vastly superior to just “steal” from one particularly good implementation with a compatible license you can simply comply with. (Better yet, avoid copying the code at all and find a library if possible.) Why in the fuck even do copyright laundering on code that is under an MIT or similar license? The authors literally tell you that you can just use it.



  • I dunno, I guess I should try it just to see what the buzz is all about, but I am rather opposed to the plagiarism-and-river-boiling combination, and paying them money is like having Peter Thiel do 10x donation matching on your donations to a Captain Planet villain.

    I personally want a model that does not store much specific code in its weights, uses RAG on compatibly licensed open source, and cites what it RAG’d. E.g. I want to set an app icon on Linux; it’s fine if it looks into GLFW and just borrows code with attribution that I will make sure to preserve. I don’t need it gaslighting me that it wrote it from reading the docs. And this isn’t literature; there’s nothing to be gained from trying to dilute copyright by mixing together a hundred different pieces of code doing the same thing.
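    For concreteness, glfwSetWindowIcon is GLFW’s documented call for this; something along these lines (the 2x2 pixel data is a placeholder I made up) is all I’d want handed to me, with a note saying where it came from. It works on X11 and Windows; Wayland compositors generally ignore it:

    ```c
    /* Minimal sketch of setting a window icon through GLFW's documented
     * glfwSetWindowIcon(). The 2x2 RGBA checkerboard is placeholder data.
     * Honored on X11/Windows; most Wayland compositors ignore it. */
    #include <GLFW/glfw3.h>

    static void set_placeholder_icon(GLFWwindow *window) {
        unsigned char pixels[2 * 2 * 4] = {
            255, 255, 255, 255,   0,   0,   0, 255,
              0,   0,   0, 255, 255, 255, 255, 255,
        };
        GLFWimage icon = { .width = 2, .height = 2, .pixels = pixels };
        glfwSetWindowIcon(window, 1, &icon);
    }
    ```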

    I also don’t particularly get the need to hop onto the bandwagon right away.

    It has all the feel of boiling a lake to run for(int i=0; i<strlen(s); ++i). LLMs are so energy-intensive in large part because of quadratic scaling, but we know the problem is not intrinsically quadratic, otherwise we wouldn’t be able to write, read, or even compile the code.
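    (To spell the analogy out: that loop is only quadratic because strlen gets re-run on every pass; the underlying work is linear. A toy illustration:)

    ```c
    #include <stddef.h>
    #include <string.h>

    /* Accidentally quadratic: strlen(s) rescans the whole string every iteration. */
    size_t count_x_slow(const char *s) {
        size_t count = 0;
        for (size_t i = 0; i < strlen(s); ++i)
            if (s[i] == 'x') ++count;
        return count;
    }

    /* Same result, linear: measure the string once and reuse the length. */
    size_t count_x_fast(const char *s) {
        size_t n = strlen(s), count = 0;
        for (size_t i = 0; i < n; ++i)
            if (s[i] == 'x') ++count;
        return count;
    }
    ```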

    Each token has the potential of relating to any other token, but in practice only relates to a few.

    I’d give the bastards some time to figure this out. I wouldn’t use an O(N^2) compiler I can’t run locally either; there is also a strategic disadvantage in any dependence on proprietary garbage.

    Edit: also I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information the LLM has memorized.
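    Back-of-the-envelope version of that point (my own rough numbers, nothing rigorous), with N tokens, model width d, and each token actually attending to only k others:

    $$ \underbrace{O(N^2 d)}_{\text{dense attention: every token scored against every other}} \;\longrightarrow\; \underbrace{O(N k d)}_{\text{each token attends to } k \ll N \text{ others}} $$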





  • Film photography is my hobby and I think that there isn’t anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

    Glass plates it is, then. Good luck matching the resolution.

    In all seriousness though, I think your normal setup would be detectable even on normal 35mm film due to 1) insufficient resolution (even at 4k, probably even at 8k) and 2) insufficient dynamic range. There would probably also be some effects of spectral response mismatch - reds that are cut off by the film’s spectral response would be converted into film-visible reds by a display.

    Detection of forgery may require use of a microscope and maybe some statistical techniques. Even if the pixels are smaller than film grains, pixels are on a regular grid and film grains are not.
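    One toy statistical test I can imagine (the whole procedure is just my guess, not anything standard from forensics): autocorrelate a scanline from a high-magnification scan; a regular pixel grid should produce strong repeating peaks at the pixel pitch, while film grain shouldn’t:

    ```c
    #include <stddef.h>

    /* Mean-removed autocorrelation of a scanline at a given lag. */
    static double autocorr(const double *samples, size_t n, size_t lag) {
        double mean = 0.0, acc = 0.0;
        for (size_t i = 0; i < n; ++i)
            mean += samples[i];
        mean /= (double)n;
        for (size_t i = 0; i + lag < n; ++i)
            acc += (samples[i] - mean) * (samples[i + lag] - mean);
        return acc;
    }

    /* Lag with the strongest correlation in [min_lag, max_lag]; a sharp peak
     * that repeats at multiples of this lag hints at a regular pixel grid
     * rather than randomly placed film grain. */
    static size_t strongest_period(const double *samples, size_t n,
                                   size_t min_lag, size_t max_lag) {
        size_t best = min_lag;
        double best_val = autocorr(samples, n, min_lag);
        for (size_t lag = min_lag + 1; lag <= max_lag; ++lag) {
            double v = autocorr(samples, n, lag);
            if (v > best_val) { best_val = v; best = lag; }
        }
        return best;
    }
    ```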

    Edit: trained eyeballing may also work fine if you are familiar with the look of that specific film.


  • Hmm, maybe too premature - ChatGPT has history on by default now, so maybe that’s where it got the idea that it was a classic puzzle?

    With history off, it still sounds like it has the problem in the training dataset, but it is much more bizarre:

    https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1

    Could also be randomness.

    Select snippet:

    Example 1: N = 2 boats

    Both ferrymen row their two boats across (time = D/v = 1/3 h). One ferryman (say A) swims back alone to the west bank (time = D/u = 1 h). That same ferryman (A) now rows the second boat back across (time = 1/3 h). Meanwhile, the other ferryman (B) has just been waiting on the east bank—but now both are on the east side, and both boats are there.

    Total time

    $$ T_2 \;=\; \frac{1}{3} \;+\; 1 \;+\; \frac{1}{3} \;=\; \frac{5}{3}\ \mathrm{hours} \approx 1\,\mathrm{h}\,40\,\mathrm{min}. $$

    I have to say, with history off it sounds like an even more ambitious moron. I think their history thing may be sort of freezing the bot’s behavior in time, because the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.




  • Oh wow, it is precisely the problem I “predicted” before: there are surprisingly few production-grade implementations to plagiarize from.

    Even for seemingly simple stuff. You might think parsing floating point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly (a correct implementation allows you to convert a floating point number to a string with enough digits, and back, and always obtain precisely the same number you started with). So even for such an omnipresent leetcode example, which has probably been implemented well over 10 000 times by various students, if you start pestering your bot with requests to make it better, you could end up plagiarizing something identifiable.
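    (The round-trip property, concretely - this just tests it, it isn’t the tricky conversion code itself; 17 significant digits is enough for IEEE-754 doubles:)

    ```c
    /* Round-trip check: print a double with enough digits (17 significant for
     * IEEE-754 binary64) and parse it back; correct conversions must reproduce
     * exactly the number we started with. */
    #include <stdio.h>
    #include <stdlib.h>

    static int roundtrips(double x) {
        char buf[64];
        snprintf(buf, sizeof buf, "%.17g", x);
        double y = strtod(buf, NULL);
        return x == y;   /* exact equality is the whole point */
    }

    int main(void) {
        printf("%d\n", roundtrips(0.1));         /* prints 1: survives the trip */
        printf("%d\n", roundtrips(1.0 / 3.0));   /* prints 1: survives the trip */
        return 0;
    }
    ```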





  • I think more low tier output would be a disaster.

    Even pre-AI I had to deal with a project where they shoved testing and compliance onto juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity, because they had a junior fixing Coverity-flagged “issues”, after I had spent at least 2 days debugging a memory corruption crash caused by one such “fix”.

    And don’t get me started on the tests. 200+ of them, and none caught any of several regressions in the handling of parameters that are shown early in the frigging how-to.

    With AI, all the numbers would be much larger - more commits “fixing coverity issues” (and worse yet, fixing “issues” that the LLM sees in the code), more so-called “tests” that don’t actually flag any real regressions, etc.





  • When they tested on bugs not in SWE-Bench, the success rate dropped to 57‑71% on random items, and 50‑68% on fresh issues created after the benchmark snapshot. I’m surprised they did that well.

    After the benchmark snapshot. Could still be before the LLM training data cutoff, or available via RAG.

    edit: For a fair test you have to use git issues that had not been resolved yet by a human.

    This is how these fuckers talk, all of the time. Also see Sam Altman’s not-quite-denials of training on Scarlett Johansson’s voice: they just asserted that they had hired a voice actor, but didn’t deny training on Scarlett Johansson’s actual voice.