These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Reproducibility and the Philosophy of Data with Clare Gollnick: Sketchnotes from TWiMLAI talk #121: Reproducibility and the Philosophy of Data with Clare Gollnick You can listen to the podcast here. In this episode, I’m joined by Clare Gollnick, CTO of Terbium Labs, to discuss her thoughts on the “reproducibility crisis” currently haunting the scientific landscape.
Since I migrated my blog from GitHub Pages to blogdown and Netlify, I wanted to start migrating (most of) my old posts too - and use that opportunity to update them and make sure the code still works. Here I am updating my very first machine learning post from 27 Nov 2016: Can we predict flu deaths with Machine Learning and R?. Changes are marked as bold comments. The main changes I made are:
In our next MünsteR R-user group meetup on Monday, June 11th, 2018, Thomas Kluth and Thorben Jensen will give a talk titled Look, something shiny: How to use R Shiny to make Münster traffic data accessible. You can RSVP here: http://meetu.ps/e/F7zDN/w54bW/f About a year ago, we stumbled upon rich datasets on traffic dynamics in Münster: high-resolution count data of bikes, cars, and bus passengers. Since that day we have been crunching, modeling, and visualizing it.
On April 12th, 2018 I gave a talk about Explaining complex machine learning models with LIME at the Hamburg Data Science Meetup - so if you’re interested: the slides can be found here: https://www.slideshare.net/ShirinGlander/hh-data-science-meetup-explaining-complex-machine-learning-models-with-lime-94218890 Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error, and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria.
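To make the LIME idea from the talk concrete, here is a minimal sketch of its core mechanism (a hypothetical example using numpy and scikit-learn, not code from the talk): perturb an instance around its neighborhood, weight the perturbations by proximity, and fit a simple local linear model whose coefficients approximate the black-box model's local behavior.

```python
# Minimal sketch of LIME's core idea: explain one prediction of a
# black-box model with a locally weighted linear surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box classifier on synthetic data (stand-in for any complex model)
X, y = make_classification(n_samples=500, n_features=5, random_state=42)
black_box = RandomForestClassifier(random_state=42).fit(X, y)

x0 = X[0]  # the single instance we want to explain
rng = np.random.default_rng(0)

# 1. Sample a perturbed neighborhood around x0
Z = x0 + rng.normal(scale=0.5, size=(1000, 5))

# 2. Query the black box for its predictions on the perturbations
probs = black_box.predict_proba(Z)[:, 1]

# 3. Weight perturbations by proximity to x0 (an RBF-style kernel)
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))

# 4. Fit an interpretable linear surrogate on the weighted neighborhood
local = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)

# The coefficients approximate each feature's local influence on x0's prediction
print(local.coef_)
```

In practice one would use the lime package itself, which adds feature selection and discretization on top of this basic recipe; the sketch above only shows the perturb-weight-fit loop that makes the local explanation possible.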
These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Systems and Software for Machine Learning at Scale with Jeff Dean: Sketchnotes from TWiMLAI talk #124: Systems and Software for Machine Learning at Scale with Jeff Dean You can listen to the podcast here. In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View.
On April 4th, 2018 I gave a talk about Deep Learning with Keras at the Ruhr.Py Meetup in Essen, Germany. The talk was not specific to Python, though - so if you’re interested: the slides can be found here: https://www.slideshare.net/ShirinGlander/ruhrpy-introducing-deep-learning-with-keras-and-python Ruhr.PY - Introducing Deep Learning with Keras and Python by Shirin Glander There is also a video recording of my talk, which you can see here: https://youtu.
In our next MünsteR R-user group meetup on Tuesday, April 17th, 2018 Kai Lichtenberg will talk about deep learning with Keras. You can RSVP here: http://meetu.ps/e/DDY1B/w54bW/f Although neural networks have been around for quite a while now, deep learning really just took off a few years ago. It pretty much all started when Alex Krizhevsky and Geoffrey Hinton utterly crushed classic image recognition in the 2012 ImageNet Large Scale Visual Recognition Challenge by implementing a deep neural network with CUDA on graphics cards.