• 177 Posts
  • 1.49K Comments
Joined 8 months ago
Cake day: February 5th, 2024

  • What? The bands already have a range; they don’t stay static on a single frequency within a band because of overloading, but if that needs to be done for a period of time, there’s no issue with it. The band already cycles, so what regulatory “machines” are involved? Stay at the top end for a while, then the bottom end for a bit. Put the cycle on a schedule instead of having everything active at once.

    And no, they wouldn’t lose internet; there are multiple bands and frequencies for that exact reason. If one is congested, it shifts to another, less congested one and cycles that way.

    Using the light and laser example: shift the red light toward orange and get some data, shift it toward blue and get the rest. You can also shift the laser a little to get more data on either side of that.

    Light, like radio, has fluctuations you can take advantage of to read between the lines.
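The band-cycling idea above can be sketched as a toy schedule: divide a band into channels and hop through them round-robin so no single frequency stays occupied continuously. The band edges and channel count below are made-up illustrative values, not any real allocation.

```python
# Toy sketch of band cycling: split a band into equal channels, then hop
# through them on a fixed round-robin schedule. Numbers are illustrative only.

def make_channels(band_low_mhz, band_high_mhz, n_channels):
    """Divide the band [low, high] MHz into n equal channels (center freqs)."""
    width = (band_high_mhz - band_low_mhz) / n_channels
    return [band_low_mhz + width * (i + 0.5) for i in range(n_channels)]

def hop_schedule(channels, slots):
    """Which channel is active in each time slot, cycling round-robin."""
    return [channels[t % len(channels)] for t in range(slots)]

channels = make_channels(10.6, 10.7, 4)  # hypothetical 100 kHz-step example band
print(hop_schedule(channels, 6))
```

Each channel is busy only one slot in four, which is the “put the cycle on a schedule” idea in miniature.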



  • They could just shift the frequency up and down so they can get data in those ranges. There are advantages to the satellites being linked together and being able to communicate with them. They could probably also shut down those bands completely, temporarily, so some science can get done.

    I get that this is a HUGE issue, but it also isn’t some massive, unaccountable problem blocking any science from getting done. It just makes things harder, and these embellished headlines don’t help.


  • Your point? You said it wasn’t feasible to do this in space, yet they already are, and at larger sizes than on Earth… They’ve also been doing this for decades and have already decommissioned hardware, yet you claimed it was never done…? The hell…?

    Your own Earth-based one that you linked has even buddied up with them, as you so nicely quoted… so which is it? It’s not possible? It’s never been done? And you were making fun of MY comment? Lmfao.








  • Earth has natural interference: the Moon, any other satellite… so yeah, they should have always been above the natural interference; they’ve always just “accounted” for it, but who knows how accurate that is. Obviously avoiding the interference is the better option. Any satellite also produces interference; it’s not like Starlink is the only one up there… you don’t think that, do you?

    They’ve avoided spending the money to put radio telescopes in space, for whatever reason, but there’s always interference here on Earth, so it’s odd you claim otherwise. There actually is radio astronomy done from space (they point the instruments toward Earth instead), so take from this what you will, but it’s better to be away from the interference than to observe through it.






  • No, but it seems like you’re assuming they would look at this sandboxed by itself…? Of course there’s more than one data point to look at. When you uploaded the image would be noted, so even if you upload an image with older EXIF data, so what? The original poster would still have the original image, and the original would have been scraped and documented when it was first hosted. So if you host the image with fake data later, the system compares the two, sees that your fake was posted six months later, and flags it like it should. And the original owner can claim authenticity.

    Metadata provides a trail and can be used with other data points to show authenticity when a bad actor goes after your image.

    You apparently assume we’d be looking at a single image’s EXIF data to determine… what, exactly? Obviously they would take every image that looks similar or matches identically and use the EXIF data to find the real one, along with the other methods mentioned.

    The only attack vector is newly created images that haven’t been digitally signed; anything digitally signed can be verified as new, unless you go to extreme lengths to fake an image and then somehow recapture it with a digitally signed camera without it being detected as fake by other methods…
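The cross-check described above can be sketched in a few lines: treat the matching copy with the earliest documented hosting date as the original, and flag any later copy no matter what its EXIF claims. All records and labels below are hypothetical.

```python
# Toy sketch: among matching copies of an image, the one documented online
# first wins the authenticity claim; later uploads are flagged even if their
# EXIF data is backdated. Record fields and labels are hypothetical.

from datetime import date

def flag_backdated_copies(copies):
    """copies: list of (label, first_seen, claimed_exif_date).
    Returns (original_label, labels_of_flagged_later_copies)."""
    original = min(copies, key=lambda c: c[1])  # earliest documented hosting
    flagged = [label for label, first_seen, _ in copies
               if first_seen > original[1]]
    return original[0], flagged

orig, flagged = flag_backdated_copies([
    ("poster_A", date(2024, 1, 5), date(2024, 1, 5)),   # original upload
    ("poster_B", date(2024, 7, 5), date(2023, 12, 1)),  # backdated EXIF, hosted 6 months later
])
print(orig, flagged)  # poster_A ['poster_B']
```

The backdated EXIF on poster_B’s copy doesn’t help it: the hosting trail, not the claimed capture date, decides who was first.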




  • ….

    https://arstechnica.com/information-technology/2024/09/google-seeks-authenticity-in-the-age-of-ai-with-new-content-labeling-system/

    It’s literally the method that’s used…

    A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they’ve been modified.

    For 5 fucking years already….
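The “digital trail” in the quote above amounts to binding provenance metadata to the image bytes with a signature, so tampering with either one breaks verification. Real C2PA uses X.509 certificates and embedded manifests; the HMAC with a shared key below is just an illustrative stand-in, and every name in it is hypothetical.

```python
# Toy sketch of signed provenance metadata: hash the image bytes, combine
# with the metadata, and sign the result. Changing either the image or the
# metadata invalidates the signature. HMAC here stands in for the real
# certificate-based signing authority; the key is made up.

import hashlib, hmac, json

SIGNING_KEY = b"hypothetical-authority-key"

def sign(image_bytes, metadata):
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes, metadata, signature):
    return hmac.compare_digest(sign(image_bytes, metadata), signature)

img = b"\x89PNG...fake image bytes"
meta = {"created": "2024-02-05", "device": "example-camera"}
sig = sign(img, meta)
print(verify(img, meta, sig))                               # True
print(verify(img, {**meta, "created": "2023-01-01"}, sig))  # False: backdating the metadata breaks the signature
```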

    Okay, what do image metadata and advertising have to do with each other…? I’m not here for conspiracy theories, I’m here to have a discussion, which you clearly can’t do.

    You claim I don’t know much… I said as much myself… yet you don’t know how images are verified…? The fuck…? Go off on whatever tangent you want, but EXIF data is the only way to determine if a photo is legitimate… yes, it can be faked… congrats for pointing that out, and only that, this entire time… even though I already mentioned it…

    What’s your point, dude? Seriously, I’m blocking you if you can’t have a discussion. Proof of ownership and detecting fakes are two complementary things; each can be used to bolster the other’s legitimacy, so why are you only looking at this from one angle? EXIF is for ownership; the methods in the comment I responded to are for other things. I mentioned THIS previously as well…