I believe lighting plays a very important part in making an artificially created scene look realistic, as in 3D modelling. That is why I think the lighting in these AI-generated images is the prime source of what impresses people about them: no matter how unrealistic or distorted the subject is, the lighting makes it look like a natural part of the background. This is clearly different from, say, poorly Photoshopped images, where the subject feels like a cutout deliberately inserted into the scene.

I am interested to understand how LLMs grasp the context of the lighting when creating images. Do they draw on samples that happen to have exactly the same lighting positions, or do they add the lighting as an overlay instead? Also, why does the lighting not look convincing in some cases, such as when multiple subjects appear together?

  • davidgro@lemmy.world · 36 points · 6 months ago

    It certainly doesn’t always get it right - I’ve seen subjects lit by bright sunlight against a nighttime background, or lit from a wildly different direction, but within a single subject the lighting usually seems consistent.

    I’ve wondered the same thing myself; my assumption is that it just correlates how lighting works across millions of training images, much like how it manages to get gravity right most of the time.

    • lets_get_off_lemmy@reddthat.com · 16 points · edited · 6 months ago

      I’m an AI researcher and yes, that’s basically right. There is no special “lighting mechanism” portion of the network designed in before training. After seeing enough images with correct lighting (whether in text-to-image transformer models or GANs), the network learns what correct lighting should look like. It’s all about the distribution of the training data. A simple example is this-person-does-not-exist.com. All of its training images are high-resolution, close-up, well-lit headshots. If all the training data instead had unrealistic lighting, you would get unrealistic lighting out. If it’s something like 50/50, you’ll get every part of the spectrum between good lighting and bad lighting at the output.
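
      To make the “no special lighting mechanism” point concrete, here’s a minimal sketch of a single GAN training step (toy PyTorch code with made-up sizes and random stand-in data, not the actual network behind that site). The only signal the generator ever receives is whether the discriminator can tell its output apart from real photos, so realistic lighting emerges only because the real photos contain it:

      ```python
      import torch
      import torch.nn as nn

      # Toy sizes and random stand-in data, purely illustrative
      latent_dim, image_dim, batch = 64, 3 * 32 * 32, 16

      generator = nn.Sequential(
          nn.Linear(latent_dim, 256), nn.ReLU(),
          nn.Linear(256, image_dim), nn.Tanh())
      discriminator = nn.Sequential(
          nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
          nn.Linear(256, 1))

      opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      real_images = torch.rand(batch, image_dim)            # stand-in for real training photos
      fake_images = generator(torch.randn(batch, latent_dim))

      # Discriminator step: push real photos toward "real" and generated ones toward "fake"
      d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
               bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
      opt_d.zero_grad(); d_loss.backward(); opt_d.step()

      # Generator step: nudge the generator so its images fool the discriminator.
      # Nothing here mentions lighting; it only gets learned because it is in the data.
      g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
      opt_g.zero_grad(); g_loss.backward(); opt_g.step()
      ```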

      That’s not to say that the overall training scheme, especially for something like GPT-4, doesn’t include secondary training stages for more complex tasks. But lighting in images is a simple thing to get right with enough training images.

      As an aside, I called that website a simple example, but I remember when it came out less than 6 years ago and it was revolutionary, so it’s crazy how fast the space has moved forward in such a short time.

      Edit: to answer the multiple-subjects question: it has probably seen fewer images with multiple subjects and doesn’t have enough “knowledge” from its training data to apply lighting accurately in those scenarios. And you can imagine lighting is more complex in a scene with more subjects, so it’s harder for the model to fit a general solution it’s seen many times to the more complex problem.

  • AFK BRB Chocolate@lemmy.world · 19 points · 6 months ago

    First, sometimes the lighting is terrible if you look closely - like shadows going one way for some objects and another way for others.

    But generative AI is generally extrapolating from its training data. It gets lighting right (when it does) because it’s processed a giant number of images, and when you tell it you want a picture of a puppy on the beach at sunset, it’s got a million pictures of puppies, and a million pictures of things on the beach at sunset. It doesn’t know if it’s right or not, but it’s mimicking those things.

  • Sethayy@sh.itjust.works · 13 points · 6 months ago

    To get a bit more technical, they build images in passes, each becoming more coherent than the last. This helps them understand how ‘things’ relate to one another, knowing them by nothing but those relations. Light plus a light source is one example, and the angle of the lighting is a deeper layer of that, filled in on a later, less noisy pass.

    This is how it’s able to build images from its training data: it sort of understands how each part relates to other things, so on each pass it’s able to organize an image out of random noise, eventually creating a ‘unique’ image inspired by its training data.
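
    Very roughly, the pass loop looks like this (a toy sketch: predict_noise is a hypothetical stand-in for the trained denoising network, which is where all of the learned relations between light sources and shadows actually live):

    ```python
    import numpy as np

    def predict_noise(image, step, prompt):
        # Hypothetical stand-in for the trained denoising network; the real
        # model is what learned how light, light sources, and shadows relate.
        return image * 0.1

    rng = np.random.default_rng(0)
    image = rng.standard_normal((64, 64, 3))   # pass 0: pure random noise

    for step in reversed(range(50)):           # each pass is less noisy than the last
        image = image - predict_noise(image, step, prompt="puppy on a beach at sunset")
    # with a real model, `image` would now be the finished, coherent picture
    ```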

    It also gives you that ‘perfect image’ quality you mention, because it’s specifically trained to produce what looks good to us - it’s essentially a function optimised for nothing but that.

  • mkwt@lemmy.world · 8 up, 2 down · 6 months ago

    The deal with LLMs is that it’s very difficult to say which piece of training material went into which output. Everything gets chopped up and mixed, and it’s computationally difficult to run backwards.

    My understanding of the image generators is that they operate one pixel at a time too, looking only at neighboring pixels. So in that sense, it’s not correct to say they understand the context of anything.

    • Pyro@pawb.social · 7 up, 2 down · 6 months ago

      It kinda understands context.

      An image generator starts with an image of static, like a TV with a bad signal. The AI looks at the static and sees shapes in it; the prompt influences what it’s trying to “see”. It then fills the static in toward a full image, and it does this in steps - more steps generally means a better-quality image.

      Also, to be clear, an LLM is a Large Language Model and is different from an image generator, though the process behind them is very similar.