Technology

SkyNTP , in Facebook turns over mother and daughter’s chat history to police resulting in abortion charges

Just yesterday here on Lemmy, I mentioned the dangers of violating privacy, and some commenters went on about "what dangers?", implying there were none…

Is it not enough to gesture broadly?

Cosmonauticus ,

No one has anything to hide, until they do

waspentalive ,

I once heard that "Anyone can be charged with a crime if they can be watched closely enough for long enough."

catch22 , in Reddit is licensing its content to Google to help train its AI models

Where's my cut?

grayman ,

You're the product, not a customer.

ryegye24 , in New Laptop Brand Shocked Whole Computer Industry - Framework Laptop - Teardown And Repair Assessment

If they come out with one that doesn't use the goofy screen ratio, then it's an auto-buy for my next laptop. Even if they don't, it's way up on my short list.

falkerie71 ,

TBH, I don't think I'll ever consider a laptop without a 16:10 or 3:2 screen now. Having that extra real estate is so good, and usually comes with a larger trackpad too.

Celivalg ,

I personally love the screen ratio

x86x87 , in Reddit is licensing its content to Google to help train its AI models

Is it Reddit's content, though?

SteveKLord OP ,

It's content that Reddit users generated which apparently is theirs to sell.

jarfil ,

From the TOS/EULA, the content belongs to each user, they just license it to Reddit to use as it pleases.

SteveKLord OP ,

So it’s user generated content that is a product for Reddit to sell, like most big tech companies do, as I said.

jarfil ,

The difference is: Reddit doesn't own the content, so they can't stop anyone else from selling it or giving it away for free; only the users (the actual owners) could.

There are Reddit content dumps out there which Reddit can't stop anyone from using... so I'm not sure what they are selling, but if it's just that, then they're scamming people.

SteveKLord OP ,

If you are posting on a walled-garden big tech site like Reddit, Instagram, or Twitter / X, the site and therefore the company certainly owns your content and all the metadata attributed to it. You're the product. This is why most of us are here on the Fediverse, where things are different. Maybe if it's a personal photo you took, then you can make a copyright claim to some degree and tediously download your data, but once it's on their network it's generally theirs to do with as they please, whether that be selling it to Google or any other advertiser or using it for in-house advertising. Often without proper informed consent, and not always legally. It's definitely a scam, I agree. Hopefully this exposes it more and brings more people to places on the Fediverse where there's no owner/seller/buyer of your data or anything else you contributed.

jarfil ,

Ownership comes with both rights and responsibilities.

Platforms want as many of the rights as possible, without the responsibilities... which is why they have a contract (TOS) where they explicitly renounce ownership, leaving it with the user, and only license the rights.

If platforms took full ownership, like in a "work for hire" agreement, they would be responsible for any illegal content a user could upload, since it wouldn't be the user's content anymore. Obviously they don't want that.

A side effect of wanting as much content as possible without owning it, is that... well, they don't own it. 😎

> Fediverse where there's no owner/seller/buyer of your data or anything else you contributed.

Incorrect. You get ownership of anything that's yours, then upload stuff under whatever TOS your instance has... what's that? it has no TOS? Then they're in for a rough awakening some day. 🤷

Whether there are sellers/buyers... is something we'll learn in time. For now, user generated content on the Fediverse gets shared with little regard or protection of anyone's rights, so anyone can make a compilation, bundle it up, slap a price tag on it, and try to sell it.

Tristaniopsis , in China invents most powerful detonation engine for hypersonic flight

Fuck the CCP with a bucket of rusty nails.

viking ,

You need to draw the line between a country's government and some of its brightest minds. This is a scientific breakthrough, and should be celebrated as such. Period.

Tristaniopsis ,

No. Sorry, but if the government is horrific then any scientific advancements will be used in unethical ways. It’s unfortunate but true.

The CCP is brutally evil and that’s that.

viking ,

The research has been published. Everybody in the world is free to replicate it. The CCP is no more or less capable of doing so, just because it happened on their turf.

Kbin_space_program ,

It has been published, but a staggering number of scientific "papers" coming out of China are turning out to be complete fabrications.

viking ,

Correct, so it should be independently reviewed and recreated to validate the claims. Crying wolf just because there were issues before wouldn't be fair.

NoIWontPickaName ,

So you don’t base any of your decisions on past experiences?

viking ,

Not when it comes to scientific research.

If the same person or someone acting under the same supervisor in the same faculty published some more amazing sounding research after previous ones had been debunked, I'd be sceptical.

But in a country with 1.4bn people and more than 3,000 universities? There's gotta be some bad eggs, but you can't discredit every single one due to the actions of a few.

I've met brilliant scientists in and from China.

Kbin_space_program ,

The trick is that they're literally spamming papers at this point.

Zerush OP ,

Most of these inventions will be used by the military first, in China as well as in the USA, which isn't an atom better in this or any other respect. It's precisely the USA that invests more in weapons and defense than any other country, and not exactly for humanitarian reasons.

Tristaniopsis ,

The US is committed to free speech, the CCP is not.

zout ,

LOL

NoIWontPickaName ,

Are we? The current wave of book bans would say different.

It must be those two gay penguins’ fault

faintwhenfree ,

I trust no articles from SCMP.

viking ,

That's a different story, but you can simply look up the research paper.

nyan , in Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

Bunch of things going on here.

On the one hand, Snapchat shouldn't be liable for users' actions.

On the other hand, Snapchat absolutely should be liable for its recommendation algorithms' actions.

On the third hand, the kid presumably lied to Snapchat in order to get an account in the first place.

On the fourth hand, the kid's parents fail at basic parenting in ways that have nothing to do with Snapchat: "If you get messages on-line that make you uncomfortable or are obviously wrong, show them to a trusted adult—it doesn't have to be us." "If you must meet someone you know on-line in person, do it in the most public place you can think of—mall food courts during lunch hour are good. You want to make sure that if you scream, lots of people will hear it." "Don't ever get into a car alone with someone you don't know very well."

Solution: make suggestion algorithms opt-in only (if they're useful, people will opt in). Don't allow known underage individuals to opt in—restrict them to a human-curated "general feed" that's the same for everyone not opted in if you feel the need to fill in the space in the interface. Get C.O. better parents.
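
To make that policy concrete, here is a minimal sketch of the gating logic in Python (the names and the age/opt-in fields are purely hypothetical, not anything Snapchat actually exposes):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    verified_age: int                     # hypothetical: an age the platform actually knows
    opted_into_recommendations: bool = False

def feed_for(user, curated_general_feed, algorithmic_feed):
    """Recommendations are opt-in only, and known minors can never opt in."""
    if user.verified_age < 18:
        return curated_general_feed       # human-curated, same for every minor
    if not user.opted_into_recommendations:
        return curated_general_feed       # the default for adults is also the curated feed
    return algorithmic_feed               # only adults who explicitly opted in

print(feed_for(User("c_o", 12, True), ["curated post"], ["algorithmic post"]))
# -> ['curated post']  (the opt-in flag is ignored for a known minor)
```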

None of that will happen, of course.

makeasnek , (edited )

> On the other hand, Snapchat absolutely should be liable for its recommendation algorithms' actions.

Should they though? The algorithm can be as simple as "show me the user with the most liked posts". Even the best designed algorithm is going to make suggestions that connect users with sex offenders, because the algorithm has no idea who is a sex offender. Unless Snapchat has received an abuse report of some kind or actively monitors all accounts all the time, they have no way to know this user is dangerous. Even if they did monitor the accounts, they won't know the user is dangerous until they do something dangerous. Even if they are doing something dangerous, it may not be obvious from their messages and photos. An online predator asking a 12 year old to meet them somewhere looks an awful lot like a family member asking the same thing, assuming there's nothing sexually suggestive in the message. And requiring that level of monitoring is extremely expensive and invasive. It means only big companies with teams of lawyers can run online social media services. You can say goodbye to the fediverse in that case, along with any expectation of privacy you or anybody else can have online. And then, well, hello turnkey fascism to the next politician who gets in power and wants to stifle dissent.
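
To illustrate just how naive such a recommender can be, here's a toy sketch (purely illustrative, not Snapchat's actual code) of "suggest the users with the most liked posts" — note that nothing in it has, or could have, any notion of who an account holder really is:

```python
from collections import Counter

def suggest_users(likes_per_author, already_connected, k=5):
    """Naive 'quick add': rank accounts purely by how many likes their posts got.
    Nothing here knows anything about the person behind the account."""
    ranked = [author for author, _ in likes_per_author.most_common()
              if author not in already_connected]
    return ranked[:k]

# Toy data (hypothetical accounts and like counts).
likes = Counter({"alice": 120, "bob": 95, "mallory": 300, "carol": 40})
print(suggest_users(likes, already_connected={"alice"}))
# -> ['mallory', 'bob', 'carol']  ('mallory' ranks first purely on popularity)
```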

Kids being hurt is bad. We should work to build a society where it happens less often. We shouldn't sacrifice free, private speech in exchange or relegate speech only to the biggest, most corporate, most surveilled platforms. Because kids will still get hurt, and we'll just be here with that many fewer liberties. Let's not forget that the US federal government has a list of known child sex offenders in the form of Epstein's client list and yet none of them are in prison. I don't believe that giving the government more control and surveillance over online speech is going to somehow solve this problem. In fact, it will make it harder to hold those rich, well-connected, child rapist fucks accountable because it will make dissent more dangerous to engage in.

nyan ,

Yes, they should. They chose to deploy the algorithm rather than using a different algorithm, or a human-curated suggestion set, or nothing at all. It's like a store offering one-per-purchase free bonus items while knowing a few of them are soaked in a contact poison that will make anyone who touches them sick. If your business uses a black box to serve clients, you are liable for the output of that black box, and if you can't find a black box that doesn't produce noxious output, then either don't use one or put a human in the loop. Yes, that human will cost you money. That's why my suggestion at the end was to use a single common feed, to reduce the labour. If they can't get enough engagement from a single common feed to support the business, maybe the business should be allowed to die.

The only leg Snapchat has to stand on here is the fact that "C.O." was violating their TOS by opening an account when she was under the age of 13, and may well have claimed she was over 18 when she was setting up the account.

Pika ,

I'm failing to see how it's Snapchat's problem; it can't know that the person was nefarious, and it's not reasonable to expect that it should have been able to know. This is like saying that Disney should be held responsible because someone decided to go on a killing spree while wearing the recommended costume of the week. It's two isolated events that happen to coincide with each other.

This is a failure on the parents' side all the way down, from the lack of supervision to letting her make a social media account below the legal age to do so.

nyan ,

Snapchat is not the only problem here, but it is a problem.

If they can't guarantee their recommendations are clean, they shouldn't be offering recommendations. Even to adults. Let people find other accounts to connect to for themselves, or by consulting some third party's curated list.

If not offering recommendations destroys Snapchat's business model, so be it. The world will continue on without them.

It really is that simple.

Using buggy code (because all nontrivial code is buggy) to offer recommendations only happens because these companies are cheap and lazy. They need to be forced to take responsibility where it's appropriate. This does not mean that they should be liable for the identity of posters on their network or the content of individual posts—I agree that expecting them to control that is unrealistic—but all curation algorithms are created by them and are completely under their control. They can provide simple sorts based on data visible to all users, or leave things to spread externally by word of mouth. Anything beyond that should require human verification, because black box algorithms demonstrably do not make good choices.

It's the same thing as the recent Air Canada chatbot case: the company is responsible for errors made by its software, to about the same extent as it is responsible for errors made by its employees. If a human working for Snapchat had directed "C.O." to the paedophile's account, would you consider Snapchat to be liable (for hiring the kind of person who would do that, if nothing else)?

Pika ,

No, I would not, unless it was proven that said employee knew the person was an S.O. and knew that the account was a minor (but at that point the employee should have disabled the account per Snapchat's policy regardless). If that data was not available to them, then they wouldn't have had the capability to know, so I would consider it not at fault.

nyan ,

Then, in my opinion, you would have failed to perform due diligence. Even if you'd thought C.O. was an adult, suggesting a woman strike up a private conversation with a man neither of you know is always something that deserves a second look (dating sites excepted), because the potential for harm is regrettably high.

MinorLaceration , in Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

Parents seriously need to be more aware of what kids are doing on their phones. Why the hell is a 12 year old on Snapchat to begin with?

Correct me if I'm wrong, but I doubt Snapchat requires photo ID in order to make an account. Besides requiring ID, it's not clear to me how else Snapchat would be able to know that she is a minor and that the perpetrator is a sex offender.

zout ,

I take it that you're not a parent. So, if Snapchat is not able to know that someone is a sex offender, how is a parent to know? All you can do as a parent is talk to your kids about the dangers of these kinds of apps. Sure, you could, and maybe should, forbid usage of these apps, but at the end of the day you're not looking over your kids' shoulders 24/7.

Heratiki ,

We had to walk a delicate line with our kids (2005-2017) when it came to interaction online. We never wanted them to feel like we were keeping them from experiencing the knowledge or social interaction the internet provided, but we also kept close tabs and paid special attention to specific behaviors. So if they were on their netbooks, we'd make it a habit to walk behind them, not to look, but just to see what their reaction would be. Kids mostly know right from wrong, and when they feel it "might" be wrong they try to hide it from their parents. If you pay attention you'll see them "hiding", and that's a sign to dig deeper. This way they maintain their privacy and any issues can be brought to light with them directly.

(Understand that the following will have specific details changed just for anonymity's sake.) Grooveshark was the first interaction we saw that was troublesome. So we sat our daughter down and asked her directly why she was trying to hide her netbook from us, and what she had been doing that she felt she needed to hide. The alternative was to relinquish the netbook until she told us. Come to find out, a friend of hers from school (female, 2 years older) was trying to slowly convince her to lie to her parents and sneak off with her. Our daughter told us this because it scared her, not because she would lose her access. We also stayed open and active with our kids, indulging in the same things they were interested in (Minecraft, Guitar Hero, etc.) regardless of whether it was something we ourselves enjoyed. So she didn't lose access to Grooveshark, because she really loved listening to music. We just kept an eye on it, and she removed her friend from communication. We explained what her friend was likely attempting, and her friend admitted to it. They're not friends now, but it never happened again.

Don’t get me wrong, we made tons of bad calls before we learned what worked. But the key to all of it is paying attention. Not hovering over them and stopping them from making mistakes. But watching the nuance of their interactions with everyone around them. If they start to get secretive then there is usually a reason. And it’s best to just talk to them about it. And if one conversation doesn’t do it then have multiple conversations. Listen to what they have to say and why they were being secretive. Works best when they’re not expecting it too (like in the middle of playing Minecraft together). Anyway that’s just IMHO.

RmDebArc_5 ,

Isn't stopping this exact thing their reasoning for why they can read all your messages?

slacktoid ,

They are too busy looking at titties or cocks

cupcakezealot ,

Putting this all on the parents is a failure to understand how parenting works. Even the most attentive parents aren't around their kids all the time, and children will always find a way to do something. Why was Snapchat connecting adults to children through algorithms in the first place?

MinorLaceration ,

Something tells me you didn't read my comment.

Unless we want Snapchat and other apps to require photo ID, how would Snapchat actually know who is a child and who is an adult? Why did the parents not know or care that the kid had Snapchat downloaded?

jaybone , in China invents most powerful detonation engine for hypersonic flight

Oh it’s Lemmy.ml

TexMexBazooka ,

Yup

master5o1 , in China invents most powerful detonation engine for hypersonic flight

I imagine that theoretical speed could only be used for drone planes.

tsonfeir ,

You could ride on it, but you’d need a cowboy hat.

kubica ,
@kubica@kbin.social avatar

If a space suit is needed, would the hat go inside or outside the suit?

tsonfeir ,

I imagine you would need a cowboy hat that would also supply oxygen.

gaael ,

Outside, why wear a hat if no one can see it?

Lath ,

Aerodynamics.

pelya ,

That's why cowboy hats have folded brims.

Ummdustry ,

It pokes through, like a space marine's ponytail.

heluecht ,

@Zerush @master5o1 Speed is not a problem. Acceleration is.
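
A quick back-of-the-envelope check (assuming sea-level speed of sound and a sustained 3 g, both just illustrative figures) shows why acceleration, not top speed, is the constraint for anything carrying a person:

```python
MACH_1 = 343.0                     # m/s, speed of sound at sea level (approximate)
target_speed = 16 * MACH_1         # ~5,488 m/s for Mach 16
g = 9.81
accel = 3 * g                      # ~29 m/s^2, roughly what a trained human can sustain

time_s = target_speed / accel                          # ~187 s, a bit over three minutes
distance_km = target_speed ** 2 / (2 * accel) / 1000   # ~512 km covered while accelerating

print(f"~{time_s:.0f} s at a sustained 3 g, covering ~{distance_km:.0f} km")
```

Even with these generous assumptions, a pilot would have to endure minutes of hard acceleration over hundreds of kilometres before the headline speed is reached.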

tvbusy , in Reddit is licensing its content to Google to help train its AI models

Spez: we want to sell our users' content to anyone with a good price.

Staff: but our users will rage and delete their content.

Spez: not if we remove the API.

darkpanda ,

How do you know that deleting anything on Reddit actually deletes anything? It might just hide the content with a soft delete in the database, which means you may not be able to see it anymore, but they can still use it for whatever.
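
For anyone unfamiliar with the pattern, a "soft delete" can be as simple as flipping a flag instead of dropping the row. A minimal sketch (this is an assumption about how such a system could work, not Reddit's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT, is_deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO comments (body) VALUES ('my totally original take')")

# What the user experiences as "delete" just flips a flag...
db.execute("UPDATE comments SET is_deleted = 1 WHERE id = 1")

# ...so the public site shows nothing:
print(db.execute("SELECT body FROM comments WHERE is_deleted = 0").fetchall())  # []

# ...but an internal export (say, for a licensing deal) still sees everything:
print(db.execute("SELECT body FROM comments").fetchall())  # [('my totally original take',)]
```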

autotldr Bot , in Stability announces Stable Diffusion 3, a next-gen AI image generator

This is the best summary I could come up with:


Stability says that its Stable Diffusion 3 family of models (which takes text descriptions called "prompts" and turns them into matching images) range in size from 800 million to 8 billion parameters.

The size range accommodates allowing different versions of the model to run locally on a variety of devices—from smartphones to servers.

Stability has made a name for itself as providing a more open alternative to proprietary image-synthesis models like OpenAI's DALL-E 3, though not without controversy due to the use of copyrighted training data, bias, and the potential for abuse.

We do not have access to Stable Diffusion 3 (SD3), but from samples we found posted on Stability's website and associated social media accounts, the generations appear roughly comparable to other state-of-the-art image-synthesis models at the moment, including the aforementioned DALL-E 3, Adobe Firefly, Imagine with Meta AI, Midjourney, and Google Imagen.

While Stable Diffusion 3 isn't widely available, Stability says that once testing is complete, its weights will be free to download and run locally.

"This preview phase, as with previous models," Stability writes, "is crucial for gathering insights to improve its performance and safety ahead of an open release."


The original article contains 614 words, the summary contains 193 words. Saved 69%. I'm a bot and I'm open source!

huginn , in China invents most powerful detonation engine for hypersonic flight

This article is bullshit: they're reporting that a blueprint was made. No demonstration, no proof of it working.

Rotary detonation isn't a new concept and GE has demonstrated a rotary detonation scramjet for 0 - Mach 5.

https://www.geaerospace.com/press-release/other-news-information/ge-aerospace-demonstrates-hypersonic-dual-mode-ramjet-rotating

If you really think that 0 to Mach 16 is within reach with current technology RDEs I have a bridge to sell you.

Darpa has all the money of God and they've been throwing billions at the exact problem of rotary detonation. Everybody knows that detonation is more fuel efficient than deflagration and that turbine engines are holding jet aircraft back from hypersonic flight.

When they make a Mach 16 demonstrator I'll believe them. Until then I'm still very excited for the demonstrated and actual capabilities of RDE Scramjets.

Norgur , in China invents most powerful detonation engine for hypersonic flight

Yeah, no. Just no. Absolutely no. Faster than everything else, and higher than everything else, and all of that while burning less energy? Come on people, stop falling for this kind of crap.

electricprism , (edited )
pastermil , in China invents most powerful detonation engine for hypersonic flight

That's nice...

...when I see it in production.

Rosco , (edited ) in China invents most powerful detonation engine for hypersonic flight

I'll wait until other countries validate the findings, or until someone makes a working prototype, before getting excited. Seems like it's one of those "too good to be true" stories like the LK-99. Hype in tech and science almost always leads to disappointment.
