
May 24 2017

… I do think it’s a pity that the word [friendzone] went this way. In its apparent first appearance, in (appropriately enough!) Friends, Joey diagnoses Ross as in Rachel’s “friendzone” — but neither of them cast this as Rachel’s fault or a choice she has made. Joey explicitly says “she has no idea what you’re thinking.” The problem is not her, it’s that Ross has waited too long to declare his feelings and now Rachel will 1) find it hard to shift the way she thinks about him and 2) be reluctant to jeopardise the friendship. It’s a noun, not a verb, and it’s a place you wander into, not a place you are put. I wish it had stayed that way.
Phospherocity (via lowoncliches)

May 23 2017


maptitude1:

The evolution of same-sex rights in Europe, 1989-2017



May 17 2017

Making AI work for everyone

I’ve now been at Google for 13 years, and it’s remarkable how the company’s founding mission of making information universally accessible and useful is as relevant today as it was when I joined. From the start, we’ve looked to solve complex problems using deep computer science and insights, even as the technology around us forces dramatic change.

The most complex problems tend to be ones that affect people’s daily lives, and it’s exciting to see how many people have made Google a part of their day—we’ve just passed 2 billion monthly active Android devices; YouTube has not only 1 billion users but also 1 billion hours of watch time every day; people navigate 1 billion kilometers across the planet using Google Maps each day. This growth would have been unthinkable without computing’s shift to mobile, which made us rethink all of our products—reinventing them to reflect new models of interaction like multi-touch screens.

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.  

[Image: With Google Lens, your smartphone camera won’t just see what you see, but also understand what you’re looking at and help you take action.]

[Image: Cloud TPUs are custom-built for machine learning; 64 of these devices can be networked into a TPU pod, an 11.5-petaflop ML supercomputer.]

[Image: DNA sequencing with machine learning]

AI can also help with basic sciences like DNA sequencing. A new tool from Google.ai helps researchers identify genetic variants more quickly.
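To make “identifying genetic variants” concrete, here is a deliberately tiny majority-vote caller: it flags positions where most aligned reads disagree with the reference base. This is only a toy illustration of the task, not the Google.ai tool, which applies deep learning and is far more sophisticated; the function name, data, and threshold are all made up for the sketch.

```python
from collections import Counter

def call_variants(reference, reads, min_fraction=0.7):
    """Toy variant caller: report (position, ref_base, alt_base) wherever
    at least min_fraction of the aligned reads disagree with the reference."""
    variants = []
    for pos, ref_base in enumerate(reference):
        bases = [read[pos] for read in reads if pos < len(read)]
        if not bases:
            continue
        # Most common base among the reads at this position
        alt, count = Counter(bases).most_common(1)[0]
        if alt != ref_base and count / len(bases) >= min_fraction:
            variants.append((pos, ref_base, alt))
    return variants

reference = "ACGTACGT"
reads = ["ACGTACGT", "ACGAACGT", "ACGAACGT", "ACGAACGT"]
print(call_variants(reference, reads))  # [(3, 'T', 'A')]
```

Real callers must also handle alignment errors, sequencing noise, and diploid genotypes, which is exactly where learned models can outperform hand-tuned heuristics.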

Abu Qader, a high school student in Chicago, taught himself TensorFlow from YouTube videos. He’s using ML to improve mammography.

[Image: Jobs feature in Search]

We built a new feature with the goal that no matter who you are or what kind of job you're looking for, you can find the job postings that are right for you.

The Assistant is a powerful example of these advances at work. It’s already available across 100 million devices, and getting more useful every day. We can now distinguish between different voices in Google Home, making it possible for people to have a more personalized experience when they interact with the device. We are now also in a position to make the smartphone camera a tool to get things done. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. If you’ve ever crawled around on a friend’s apartment floor to read a long, complicated Wi-Fi password off the back of a router, your phone can now recognize the password, see that you’re trying to log into a Wi-Fi network, and automatically log you in. The key thing is, you don’t need to learn anything new to make this work—the interface and the experience can be much more intuitive than, for example, copying and pasting across apps on a smartphone. We’ll first be bringing Google Lens capabilities to the Assistant and Google Photos, and you can expect it to make its way to other products as well.
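The Wi-Fi example involves two stages: optical character recognition on the router label, then parsing the recognized text into credentials. Lens’s actual pipeline isn’t public; the sketch below covers only the second stage, and the function name, field patterns, and label format are all hypothetical assumptions for illustration.

```python
import re

def extract_wifi_credentials(ocr_text):
    """Parse an SSID/password pair out of OCR'd router-label text.
    The field names matched here are illustrative; real labels vary widely."""
    ssid = re.search(r"SSID[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:password|key)[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    if ssid and password:
        return {"ssid": ssid.group(1), "password": password.group(1)}
    return None  # couldn't find both fields

label = "SSID: HomeNet-5G\nWPA2 Key: x7Gq-99Lk-2mPa"
print(extract_wifi_credentials(label))
# {'ssid': 'HomeNet-5G', 'password': 'x7Gq-99Lk-2mPa'}
```

The hard part in practice is the vision stage, not the parsing: recognizing small, low-contrast text on a curved plastic label is where the deep-learning advances described above come in.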

[Warning, geeky stuff ahead!!!]

All of this requires the right computational architecture. Last year at I/O, we announced the first generation of our TPUs, which allow us to run our machine learning algorithms faster and more efficiently. Today we announced our next generation of TPUs—Cloud TPUs, which are optimized for both inference and training and can process a LOT of information. We’ll be bringing Cloud TPUs to the Google Compute Engine so that companies and developers can take advantage of them.
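As a quick sanity check on the pod numbers quoted above, dividing the pod’s aggregate throughput by the device count gives the implied per-device performance (a back-of-the-envelope figure; real throughput depends on precision and workload):

```python
# Implied per-device throughput from the announced TPU pod numbers.
pod_petaflops = 11.5     # one TPU pod, as announced
devices_per_pod = 64
per_device_teraflops = pod_petaflops * 1000 / devices_per_pod
print(f"~{per_device_teraflops:.1f} TFLOPS per Cloud TPU device")
# ~179.7 TFLOPS per device
```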

It’s important to us to make these advances work better for everyone—not just for the users of Google products. We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips. But today, there are too many barriers to making this happen. 

That’s the motivation behind Google.ai, which pulls all our AI initiatives into one effort that can lower these barriers and accelerate how researchers, developers and companies work in this field.

One way we hope to make AI more accessible is by simplifying the creation of machine learning models called neural networks. Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That’s why we’ve created an approach called AutoML, showing that it’s possible for neural nets to design neural nets. We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs. 
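AutoML’s published approach trains a controller network, via reinforcement learning, to propose progressively better architectures. The much simpler random-search baseline below only illustrates the basic loop—sample a candidate architecture from a search space, score it, keep the best—and everything in it (the search space, the `evaluate` proxy in place of actually training each candidate network) is a made-up stand-in:

```python
import random

# Toy neural architecture search via random sampling. Google's AutoML
# uses a learned controller rather than random search; this sketch only
# shows the sample-evaluate-keep-best loop that any such system shares.

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "elu"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation
    accuracy; a real system trains each network (or a cheap proxy)."""
    score = arch["layers"] * 0.01 + arch["width"] * 0.001
    if arch["activation"] == "relu":
        score += 0.05
    return score

def search(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = search()
print(best, round(score, 3))
```

The gap between this baseline and AutoML is the point of the research: a learned controller spends its trial budget far more efficiently than blind sampling, which is what could put architecture design within reach of non-specialists.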

In addition, Google.ai has been teaming Google researchers with scientists and developers to tackle problems across a range of disciplines, with promising results. We’ve used ML to improve the algorithm that detects the spread of breast cancer to adjacent lymph nodes. We’ve also seen AI make strides in the speed and accuracy with which researchers can predict the properties of molecules and even sequence the human genome.

This shift isn’t just about building futuristic devices or conducting cutting-edge research. We also think it can help millions of people today by democratizing access to information and surfacing new opportunities. For example, almost half of U.S. employers say they still have issues filling open positions. Meanwhile, job seekers often don’t know there’s a job opening just around the corner from them, because the nature of job posts—high turnover, low traffic, inconsistency in job titles—has made them hard for search engines to classify. Through a new initiative, Google for Jobs, we hope to connect companies with potential employees, and help job seekers find new opportunities. As part of this effort, we will be launching a new feature in Search in the coming weeks that helps people look for jobs across experience and wage levels—including jobs that have traditionally been much harder to search for and classify, like service and retail jobs.

It’s inspiring to see how AI is starting to bear fruit that people can actually taste. There is still a long way to go before we are truly an AI-first world, but the more we can work to democratize access to the technology—both in terms of the tools people can use and the way we apply it—the sooner everyone will benefit. 

To read more about the many, many other announcements at Google I/O—for Android, and Photos, and VR, and more, please see our latest stories.

Parts of Berlin - Berlin in Einzelheiten