

Too long; didn't read (TL;DR)

  • Having a common terminology (in general) is key to understanding and further development

  • Europe is still in the race for AI, with a lot of high-quality (basic) research

  • Use machine learning to shed light and support humans in searching - instead of directly trying to solve a problem end-to-end

  • Privacy is important, and new ways to collaborate while ensuring privacy are emerging, such as Federated Learning, which combines a global model with local models, as mentioned by Google at its I/O conference this year

  • Self-supervised learning is a better term than unsupervised learning for some machine learning tasks

  • By 2022, 75m jobs will be removed but 133m new jobs will be created related to AI; the economy will grow by ~14 trillion USD by 2030






I cannot believe that more than 1.5 years have already passed since I had the pleasure of meeting Fabian in person in a small café in Berlin to discuss the possibility of giving a talk at his Rise of AI conference (you can read about it here). Now, one year has passed since that conference, and I am glad that I got a ticket to attend the 5th edition as well - thanks a lot, Veronika and Fabian! So, on May 16th 2019, 800 people of around 25 different nationalities came together to give presentations, to listen to them, to learn from each other, and to network. The event's location was the same as last year, the Telekom building at Gendarmenmarkt. This year, however, there was one additional stage and a workshop room with some deep-dive sessions.



The event's schedule looked very promising and I took something away from each session I attended. In the following, I give a short summary of those sessions, based on my notes and what I remember. As a heads-up and for a deeper look, you can find the slides of the conference here and more pictures here.

AI in healthcare – From personalised medicine to virtual doctors by Iris Hulzink


The day started for me with panel discussions, and everybody could just decide which one to attend. I was on my way to a discussion about using AI in enterprises when I stopped by the healthcare discussion, where I then stayed out of pure interest. Iris Hulzink led the discussion in an open and professional way and we covered several topics. One of our discussion points was the impact of the big tech companies on healthcare and who should own the data. Relatedly, Apple claimed in one of their keynotes that their biggest impact on humanity will be their contributions to healthcare.


Apple Privacy Ad at the CES 2019 in Vegas (source)


Following the recent scandals around privacy, companies take this more seriously now - at least they claim to do so - as privacy was a major point at Facebook's and Google's main conferences this year. I believe that today's technology can heavily improve the healthcare system and make everyone's life easier, given that the companies respect your privacy and data. When asked, I provided an example of how a possible scenario of digitalized healthcare could look. In the past and even today, a doctor has your health data: you go to a doctor, get a treatment, and the doctor keeps a record about you. Now, I could envision that you get the record on your phone, the doctor does not keep any data, and when you come back you just upload the record to the doctor's system for the session. This way, nobody else has the data unless you share it with them - as said, given we live in a secure and private world - and you can switch between doctors as you please. So, although I am passionate about the whole healthcare topic, let's continue with the rest of the conference.
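Purely to illustrate the data flow I sketched above - this is not any real system, all names are hypothetical, and a real solution would of course need encryption, authentication, and consent handling - a toy version could look like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HealthRecord:
    """Hypothetical patient-owned record: it lives on the patient's phone."""
    entries: list = field(default_factory=list)

@dataclass
class DoctorSession:
    """The doctor only works on a record shared for the duration of a visit."""
    shared_record: Optional[HealthRecord] = None

    def start_visit(self, record: HealthRecord):
        self.shared_record = record          # patient uploads the record

    def treat(self, diagnosis: str):
        self.shared_record.entries.append(diagnosis)

    def end_visit(self) -> HealthRecord:
        record, self.shared_record = self.shared_record, None  # practice keeps nothing
        return record

# The patient holds the only long-lived copy and can take it to any doctor.
my_record = HealthRecord()
visit = DoctorSession()
visit.start_visit(my_record)
visit.treat("seasonal allergy, prescribed antihistamine")
my_record = visit.end_visit()
print(my_record.entries)
```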

Overcoming fundamental problems of (machine) intelligence research by Dagmar Monett


The first talk I listened to was about problems in research, presented by Dagmar Monett. The key point I took away was that it is fundamental to have a common language and definition of terms in order to create transparency, to create valid documentation, to understand, and to develop further.
It is important to have a common terminology to create a common understanding

One example the speaker gave was that she wrote her thesis about model configuration, a task that is nowadays referred to with the fancy phrase hyperparameter optimization. We live in a world of overselling, shiny buzzwords, and hype, don't we? I think it is important to keep track of how words and their meanings evolve in order to understand and connect information from different points in time.

AI for Enterprise: Challenges and Solutions from the messy real world by Pascal Weinberger


Some of the talks given on the side stages were really, really crowded. So, during the next talk, to which I arrived a few minutes late, I had to stand at the back of the room in front of a pillar. Luckily, TVs were put up at the sides of the room so I was able to see the slides anyway. The speaker, Pascal Weinberger, talked about the fact that future algorithms will work with smaller datasets in a distributed and secure way, allowing teams to go through the machine learning cycle faster.



Since the cloud is often the only infrastructure option for machine learning, techniques such as differential privacy can be helpful. He also mentioned the technique of using a global model trained on anonymized data that is then fine-tuned locally; an approach Google mentioned at its I/O conference this year as being used by Gboard (watch the keynote passage about Federated Learning and Gboard here), playing into the privacy-first narrative of the big companies that I mentioned at the beginning of this blog. Collaborative learning, where each participant trains on decentralized data silos and the results are then combined, goes, Weinberger said, in a similar direction: respecting privacy while still being able to work together.
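To make the federated idea a bit more tangible, here is a minimal sketch of federated averaging on a toy linear-regression problem. The setup, names, and hyperparameters are purely my own illustration and have nothing to do with how Gboard or any production system actually implements this:

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=5):
    """Each client fine-tunes a copy of the global model on its own data;
    plain linear regression via gradient descent keeps the sketch short."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)    # gradient of the mean squared error
        w -= lr * grad
    return w, len(y)

def federated_averaging(global_weights, clients):
    """The server aggregates client updates weighted by local dataset size;
    the raw data never leaves the clients."""
    updates = [local_update(global_weights, data) for data in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Toy example: three clients with private data, one shared model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                          # communication rounds
    w = federated_averaging(w, clients)
print(w)  # approaches [2.0, -1.0] without ever pooling the raw data
```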

AI in China, Europe and USA: a comparison of Technology and Research by Hans Uszkoreit


Hans Uszkoreit, the Scientific Director of the DFKI, picked up the keynote topic from last year: how AI in China, the US, and Europe is developing and who is winning the race. He cited Kai-Fu Lee (see his Wikipedia page), a Taiwanese VC and former executive at Microsoft, Apple, and Google, who said that he "... left out Europe because I didn’t think there was a good chance for it to take even a so-called ‘bronze medal’ in this AI competition." (link to the interview). Uszkoreit shed some light on this and dived deeper into different metrics. While it is true that China overtook the US and Europe in the number of published papers, the quality of US and European papers is higher, judging by which conferences accept those papers and whether or not they win awards.
Europe does not even have the chance to win the bronze medal in the AI competition

He argued that this might be because a lot of people are going for low-hanging fruit to get famous and rich quickly without really bringing the field forward. One reason Europe is not that well known for AI is that research there happens on a much quieter level: Europe is a frontrunner in basic research and relies on the US to turn the results into products and commercialize them. The problem, of course, is that basic research costs a lot of money while product research and commercialization bring money in. I guess it was Fabian who said in his keynote that nowadays you have to be loud and (over-)sell what you are doing to attract talent and investors. Still, it is not as if no applied AI is happening in Europe, with SAP being a main European actor in bringing AI to enterprises - I want to state here that although I am an SAP employee at the time of writing, Uszkoreit brought up this example on his own ;). The exchange of talent, according to Uszkoreit, makes it difficult anyway to say that something is purely American, European, or Chinese, as researchers of different nationalities usually collaborate on a paper.


Data Science Will Eat Its Children by Georg Wittenburg


The presenter of the "Data science will eat its children" talk, Georg Wittenburg, told us about the motivation behind his company inspirient: we hear about state-of-the-art AI every day, ranging from recognizing handwriting automatically to autonomous driving to fully robotized factories and sensual AI. Yet, when importing a CSV file into your spreadsheet program, you are still asked whether the separator is a comma, a tab, or a semicolon. So, they trained classifiers to automatically detect the separator, then they trained classifiers to detect the data types in the file, next they tried to figure out what kind of analysis you can perform on it, then they automated that process, and so on.
Gartner sees a potential of ~40% automation of data science tasks by 2020 (source)

By now, they have 226 different classifiers and offer automatic insights about existing data. The important point, he stressed, is that it is not all about automatically figuring out which data is important to visualize or show. Instead, it is important to make it as easy as possible for humans to search and detect it themselves. In other words, their AI takes over the tiring tasks of visualization, correlation, etc. and gives humans more time to check what really makes sense and what does not. He told us about the so-called streetlight effect: people search where it is easiest to search. Their machine learning solutions shed light so that it is easier to search.
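As a toy illustration of the first steps Wittenburg described, here is a simple heuristic stand-in for separator and column-type detection (inspirient uses trained classifiers; these hand-written rules are only meant to show what such a component decides):

```python
import csv
from io import StringIO

def detect_separator(sample: str, candidates=(",", ";", "\t", "|")) -> str:
    """Heuristic stand-in for a trained separator classifier: pick the candidate
    that occurs equally often on every line (most consistent column count)."""
    best, best_score = candidates[0], -1
    for sep in candidates:
        counts = [line.count(sep) for line in sample.splitlines() if line]
        score = min(counts) if counts and max(counts) == min(counts) else 0
        if score > best_score:
            best, best_score = sep, score
    return best

def detect_column_types(sample: str, sep: str) -> dict:
    """Guess a coarse type (int, float, str) per column from the data rows."""
    rows = list(csv.reader(StringIO(sample), delimiter=sep))
    header, data = rows[0], rows[1:]
    types = []
    for i, _ in enumerate(header):
        values = [row[i] for row in data]
        if all(v.lstrip("-").isdigit() for v in values):
            types.append("int")
        else:
            try:
                [float(v) for v in values]
                types.append("float")
            except ValueError:
                types.append("str")
    return dict(zip(header, types))

sample = "name;age;score\nAda;36;0.91\nGrace;45;0.87"
sep = detect_separator(sample)            # ';'
print(detect_column_types(sample, sep))   # {'name': 'str', 'age': 'int', 'score': 'float'}
```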



I found this session inspiring for two reasons. First, if you face a problem again and again, it is a good indicator that others might have the same problem and that it might be a good basis for founding a company. Second, your company does not have to solve a problem end-to-end with full automation. Starting small, step by step, and shedding light to support the task instead of fulfilling it completely is much better than no light at all.

Learning from Data but not Labels by Jakob Uszkoreit


I had the pleasure of listening to Jakob Uszkoreit, son of Hans Uszkoreit whom I already covered in this post, who leads the Google Brain division in Berlin. His talk was about self-supervised learning. During the talk he mentioned Yann LeCun and his understanding of self-supervised learning (see LeCun's Facebook post); if I remember correctly, he said that he understands self-supervised and unsupervised learning slightly differently than LeCun. Uszkoreit started by stating that in some cases it is difficult to use reinforcement learning. For example, in the AlphaGo case where the AI beat Lee Sedol, the program could teach itself by playing the game again and again. In healthcare, however, creating such a simulation is really difficult, and reinforcement learning on real patients - well, it could come to a good result, but you don't want to be one of the first X patients, right? So, instead of learning from a simulated environment, he gave an example of learning from observation, like a child learns how to tie its shoes by watching its parents.



As I understood it, in the reinforcement learning world the child ties its shoes again and again until it succeeds, learning from feedback signals, for example by falling on the ground when walking with untied shoes. The presentation mainly revolved around examples of colorizing videos based on a single frame, for which I found this blog providing the examples he gave. In the Q&A session, he shared a few interesting insights, for example that BERT (which apparently falls into the category of self-supervised learning) is already used inside Google's tech stack and that it observably improved the results of a few tasks, such as figuring out whether two questions are similar to each other.
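To illustrate the core idea of self-supervised learning - the training signal is derived from the data itself rather than from human labels - here is a tiny, purely illustrative sketch of a masked-word pretext task, roughly in the spirit of BERT's pre-training (the function and its parameters are my own invention):

```python
import random

def make_masked_examples(sentence: str, mask_token: str = "[MASK]", p: float = 0.15):
    """Self-supervision in a nutshell: randomly mask tokens and let the original
    tokens become the prediction targets, so no human annotation is needed."""
    tokens = sentence.split()
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < p:
            inputs.append(mask_token)
            targets.append(tok)        # the model has to reconstruct this token
        else:
            inputs.append(tok)
            targets.append(None)       # no loss is computed at unmasked positions
    return inputs, targets

random.seed(1)
inputs, targets = make_masked_examples(
    "self supervised learning derives its labels from the data itself"
)
print(inputs)
print(targets)
```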

What is the next big thing in AI technology? by Christian Guttmann


Christian Guttmann's session was a little more high-level - he told us that he had been asked to do it in TED-talk style. Some of the points I had already heard a couple of times over the last years, but I think it is good to repeat them from time to time so we do not forget them, as their impact is quite big.

For example, he postulated that his children, who are under 5 years old, will with high probability never learn how to drive a car because in 15 years it will not be necessary to do so. They will live in a highly personalized world, something we can already feel today. I get different advertisements and recommendations than you do. I get different offers and vouchers. My phone has learnt my typing behavior. I can order a car that is customized to my wishes. I can purchase sneakers that I designed online using AR and VR. This trend will only increase in the future. We will live in a machine-infused world even more than today. Instead of going to a human therapist we might use a conversational bot, as it can be easier to open up to the latter. Instead of interacting with humans we might interact with robots, something already happening in senior homes today (source); soon enough maybe also on a sexual level. Based on those developments, according to Guttmann, the next big thing in AI will include teamwork between AI agents, collaboration, and trust.
AI doesn't kill jobs but tasks

Guttmann put up the numbers that 75 million jobs will be removed by 2022 while, in the same timeframe, 133 million new jobs will be created. The economy will grow by 13 to 15.7 trillion USD by 2030. Despite that huge impact, he said that fewer than 10,000 experts exist today who really understand the field and can move the needle in one direction or another. He pointed out that Obama's strategy is worth reading (I guess he meant this paper) and that the US and Finland are frontrunners.

Closing Notes


Raise the awareness for AI as AI itself rises

One of the main intentions of the yearly Rise of AI conference is to bring people together and to strengthen Europe in the race, so Fabian closed the conference with the appeal that we all have to raise the awareness for AI as AI itself rises - in the community as well as in our companies and with our business partners. Go where our economy was, and perhaps still is, strong, but where also the biggest problems and opportunities lie, such as machine engineering (Maschinenbau), industrial production (Industriefertigung), automotive, ... These industries need the help of experts, visionaries, and engineers to leverage today's technologies. One of the speakers, unfortunately I forgot who it was, mentioned that we could easily spend the next 10 years implementing the current state of research and applying it to various fields.



The four stages were interesting, the speaker selection great, the food delicious, and the overall organization well planned and executed. I met a lot of interesting people, for example from the Berlin startup CleanRide, people who studied at Harvard, VCs, visionaries, practitioners, lab leads, lab researchers, engineers - a colorful bunch. I would have loved to hear a little more detailed and/or technical content - not with mathematical formulas etc., we leave that to research conferences such as NeurIPS - but with some insights into the tech stacks and algorithms used by the various companies and startups presenting on the floors, their team setups, etc. The event was very well organized and the decoration and setup were great; though, if the location stays the same, I think there must be a concept for bringing fresh air into the smaller rooms before somebody is vaporized - not by killer AI but by missing oxygen and stickiness. Joking aside, I very much enjoyed the conference again!

Feel free to share feedback with me or to leave comments 🙂






The Twitter handles used during the conference were RiseofAI and @Riseof_AI.

Next year's conference will take place on May 13th & 14th, 2020 in Berlin.