Save The Internet: Call Congress now: 202-759-7766

This is an Internet emergency. Less than 48 hours left until the vote to kill #NetNeutrality. #BreakTheInternet to demand that Congress #StopTheFCC. Take action: Call Congress now: 202-759-7766

Black Friday is the busiest day of the year for plumbers — but not for the reason you think

  • Black Friday is the busiest day of the year for plumbers, who receive up to 50% more calls than they do on a normal Friday.
  • Clogged sinks are the most common problem, as people improperly dispose of food scraps down the drain.

Plumbers like buying discount TVs as much as anyone, but don’t expect them to join you in much Black Friday shopping.

For plumbers, the day after Thanksgiving is “Brown Friday.”

Roto-Rooter, the largest supplier of plumbing, sewer, and drain services, says Black Friday is the busiest day of the year for plumbers. Calls for service increase by up to 50% compared to a normal Friday, and up to 27% over a regular Friday-to-Sunday period.

But don’t blame the human waste that follows a hearty Thanksgiving meal.

“It’s not even close,” Paul Abrams, director of public relations for Roto-Rooter, told ABC. “The number one reason for calls is kitchen sink drains and garbage disposals.”

When in the throes of cooking a Thanksgiving feast, it may seem okay to wash wads of potato skins, bits of turkey, or oily drippings down the drain. But Roto-Rooter advises people to throw all solids and oils away, and never to use the toilet to dispose of scraps that don’t fit down the sink.

“This time of year, homes have extra occupants in the form of holiday guests who are taking extra showers and flushing more toilets,” Abrams said in a statement. “That alone puts additional stress on many residential drain systems.”

Adding in other bulky objects only raises the chance that your Black Friday will turn brown.

Facebook rolls out AI to detect suicidal posts before they’re reported

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.

Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local-language resources and first-responder contact info. It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners, like the National Suicide Prevention Lifeline and Forefront, through which to provide resources to at-risk users and their networks.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly major benefits to the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

[Update: Facebook’s chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook takes the responsible use of AI seriously.

The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in. Also, Guy Rosen and team are amazing, great opportunity for ML engs to have impact.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.]

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”
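Facebook hasn’t published how its model works, but the pattern-matching idea described above can be illustrated with a toy sketch. Everything here is hypothetical: the phrase lists, the scoring, and the threshold are invented for illustration and are not Facebook’s actual system.

```python
# Toy illustration of the pattern-matching idea described above: score a post
# by matches in its text and in friends' comments, and flag it for human
# review when enough patterns co-occur. All phrases and thresholds are
# hypothetical, not Facebook's.

RISKY_PHRASES = ["want to end it", "can't go on", "goodbye everyone"]
CONCERNED_COMMENTS = ["are you ok", "do you need help", "please call me"]

def risk_score(post_text, comments):
    """Count pattern matches in the post text and in comments on it."""
    text = post_text.lower()
    score = sum(phrase in text for phrase in RISKY_PHRASES)
    for comment in comments:
        c = comment.lower()
        score += sum(phrase in c for phrase in CONCERNED_COMMENTS)
    return score

def should_flag(post_text, comments, threshold=2):
    """Route to a human moderator once the combined score crosses a threshold."""
    return risk_score(post_text, comments) >= threshold
```

A real system would use a trained classifier over word and image features rather than fixed phrase lists, but the shape is the same: signals from the post and its comments are combined, and only posts above a risk threshold reach human moderators.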

“We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen says. “This puts Facebook in a really unique position. We can help connect people who are in distress to friends and to organizations that can help them.”

How suicide reporting works on Facebook now

Through the combination of AI, human moderators and crowdsourced reports, Facebook could try to prevent tragedies like when a father killed himself on Facebook Live last month. Live broadcasts in particular have the power to wrongly glorify suicide, hence the new precautions. They can also reach a large audience, since everyone sees the content simultaneously; recorded Facebook videos, by contrast, can be flagged and taken down before many people view them.

Now, if someone is expressing thoughts of suicide in any type of Facebook post, Facebook’s AI will both proactively detect it and flag it to prevention-trained human moderators, and make reporting options for viewers more accessible.

When a report comes in, Facebook’s tech can highlight the part of the post or video that matches suicide-risk patterns or that’s receiving concerned comments. That saves moderators from having to skim through a whole video themselves. AI prioritizes user reports as more urgent than other types of content-policy violations, like depictions of violence or nudity. Facebook says these accelerated reports get escalated to local authorities twice as fast as unaccelerated reports.
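The prioritization step can be sketched with a standard priority queue. The category names and severity ranks below are invented for illustration; Facebook hasn’t disclosed how its moderation queue actually orders reports.

```python
import heapq
import itertools

# Hypothetical severity ranks: lower number = handled sooner.
SEVERITY = {"suicide_risk": 0, "violence": 1, "nudity": 2}

class ReportQueue:
    """Moderation queue sketch that surfaces suicide-risk reports before
    other content-policy violations, preserving arrival order within a
    severity level."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: arrival order

    def add(self, report_id, category):
        # Tuples compare element-by-element, so severity wins first,
        # then earlier arrivals.
        heapq.heappush(self._heap, (SEVERITY[category], next(self._counter), report_id))

    def next_report(self):
        _, _, report_id = heapq.heappop(self._heap)
        return report_id
```

The point of the design is simply that urgency, not arrival time, decides what a moderator sees next, which is what lets the riskiest reports jump the line.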


Facebook’s tools then bring up local language resources from its partners, including telephone hotlines for suicide prevention and nearby authorities. The moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user. “One of our goals is to ensure that our team can respond worldwide in any language we support,” says Rosen.

Back in February, Facebook CEO Mark Zuckerberg wrote that “There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner . . .  Artificial intelligence can help provide a better approach.”

With more than 2 billion users, it’s good to see Facebook stepping up here. The company has created a way for users to get in touch with and care for each other, but it has also, unfortunately, created an unmediated real-time distribution channel in Facebook Live that can appeal to people who want an audience for violence they inflict on themselves or others.

Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, which Facebook seems to be coming to terms with.


AT&T wants you to forget that it blocked FaceTime over cellular in 2012

AT&T’s push to end net neutrality rules continued yesterday in a blog post that says the company has never blocked third-party applications and that it won’t do so even after the rules are gone.

Just one problem: the blog post fails to mention that AT&T blocked Apple’s FaceTime video chat application on iPhones in 2012 and 2013. Policy Director Matt Wood of advocacy group Free Press pointed out the omission in a tweet:

I guess you can credit Bob Quinn & @ATTPublicPolicy for having the guts to lie so confidently. But when it says @freepress‘s 2010  predictions about mobile blocking were wrong, AT&T conveniently omits blocking FaceTime on cellular in 2012. 

In AT&T’s new blog post, Senior Executive VP Bob Quinn refers back to a prediction Free Press made in 2010 when the first version of the Federal Communications Commission’s net neutrality rules were adopted.

“The rules pave the way for AT&T to block your access to third-party applications and to require you to use its own preferred applications,” Free Press said at the time. The 2010 rules imposed fewer restrictions on mobile carriers than on home Internet service providers, which is what raised concerns for Free Press back then.

Quinn says now that Free Press’s prediction was totally wrong.

“Of course, none of those predictions ever came true then and they won’t come true after the FCC acts here either,” Quinn wrote yesterday.

But in fact AT&T did block FaceTime on its cellular network when users tried to access the application from certain data plans, such as unlimited data packages. Apple made FaceTime work over cellular networks in 2012 with the release of iOS 6, but AT&T said it would only “enable” FaceTime on cellular if you bought a “Mobile Share data plan.”

Switching to Mobile Share required unlimited data customers to give up that unlimited perk. Even AT&T customers with limited data plans couldn’t access FaceTime on cellular if they weren’t paying for one of the then-new shared plans. If you didn’t have the right data plan, you had to use Wi-Fi for FaceTime.

AT&T said it was reasonable network management

The 2010 net neutrality rules prohibited mobile broadband providers from “block[ing] applications that compete with their voice or video telephony services.” The rule applied except when blocking an application could be justified as “reasonable network management.”

Free Press and other groups in September 2012 accused AT&T of violating the no-blocking rule, saying that the reasonable network management exception shouldn’t apply. “There is no technical reason why one data plan should be able to access FaceTime, and another not,” Public Knowledge Senior Staff Attorney John Bergmayer said at the time.

Of course, AT&T disagreed. There is no “blocking issue” because FaceTime is pre-loaded on iPhones, Quinn wrote in August 2012:

The FCC’s net neutrality rules do not regulate the availability to customers of applications that are pre-loaded on phones. Indeed, the rules do not require that providers make available any pre-loaded apps. Rather, they address whether customers are able to download apps that compete with our voice or video telephony services. AT&T does not restrict customers from downloading any such lawful applications, and there are several video chat apps available in the various app stores serving particular operating systems.

Free Press countered that “AT&T is inventing words that are not in the FCC’s rules in a weak attempt to justify its blocking of FaceTime.” The word “pre-loaded” did not appear in the FCC’s 2010 net neutrality order.

AT&T also applied restrictions to Google Hangouts on Android phones.

AT&T eased the restrictions on limited data plans a few months after the initial policy, but unlimited data customers were still blocked from using FaceTime. AT&T finally lifted all these limitations on pre-loaded video apps by the end of 2013.

Quinn justified AT&T’s decision to block FaceTime again in November 2012.

“[W]ith the FaceTime app already pre-loaded on tens of millions of AT&T customers’ iPhones, there was no way for our engineers to effectively model usage, and thus to assess network impact,” he wrote then.

Who do you trust?

But in this week’s AT&T blog post claiming that AT&T never blocked third-party applications, the FaceTime blocking did not even warrant a mention. We asked an AT&T spokesperson why this wasn’t included, but the company simply pointed us back to its statements from 2012.

AT&T’s blocking of FaceTime was cited by the Federal Communications Commission in its 2015 decision to impose stronger rules on both home and mobile broadband providers. The current rules prohibit home and mobile ISPs from blocking or throttling any lawful Internet content, subject to reasonable network management.

Those rules are about to be thrown out, as the FCC’s Republican majority has scheduled a December 14 vote to get rid of the net neutrality rules. While Free Press contends that Internet users can’t trust ISPs to play fair without the rules, Quinn says that isn’t true.

“AT&T intends to operate its network the same way AT&T operates its network today: in an open and transparent manner,” he wrote yesterday. “We will not block websites, we will not throttle or degrade internet traffic based on content, and we will not unfairly discriminate in our treatment of Internet traffic.”

In short, “there will be no change in how your Internet works after the order is adopted,” Quinn wrote.

JON BRODKIN Jon is Ars Technica’s senior IT reporter, covering the FCC and broadband, telecommunications, wireless technology, and more