I use the flowerchecker app http://www.flowerchecker.com/ where you have to pay a small fee per plant identification. It actually uses real human biologists to identify the plant.
I don't know a lot about the service, but two of the three experts are not students. The third is a PhD student, whom I hold in higher regard in terms of knowledge than a typical undergrad. If they identify accurately, I'd think it matters even less. Also, it's only $1, so you get what you pay for?
I did this for common plant diseases using Tensorflow on Android when learning how to use TF last year. Found the accuracy to be not that great with real world images that are not in the dataset (with a dataset of 50,000 images from PlantVillage). I think there are just too many visual similarities between various plants (especially when taken from various angles) that the network doesn't focus on the right things. Did a short writeup here:
It could work for a subset of plants, or potentially with a much larger training set - but I think NIR spectral/hyperspectral imaging would be the way forward here with more differentiating data points.
>I think there are just too many visual similarities between various plants
I get the impression that you tried to train a single classifier to diagnose disease in any species in the PlantVillage database.
You might get better results by training a separate classifier for each plant species (or starting with just one species, such as tomato, for which PlantVillage has 10 disease categories). A farmer knows what crop they're growing, so can select the correct classifier when they submit a photo.
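A minimal sketch of that routing idea (the crop names, label strings, and classifier stubs here are illustrative, not the commenters' actual code):

```python
# Hypothetical sketch: route a photo to a per-crop disease classifier
# instead of running one classifier over every species. The stub
# functions stand in for models trained on one crop's diseases each.

def classify_tomato(image):
    # Stand-in for a model trained only on PlantVillage's 10 tomato
    # disease categories.
    return "tomato___early_blight"

def classify_potato(image):
    # Stand-in for a potato-only disease model.
    return "potato___late_blight"

CLASSIFIERS = {
    "tomato": classify_tomato,
    "potato": classify_potato,
}

def diagnose(crop, image):
    """The farmer selects the crop; we only run that crop's model."""
    if crop not in CLASSIFIERS:
        raise ValueError(f"no classifier trained for {crop!r}")
    return CLASSIFIERS[crop](image)
```

The point of the dispatch is that each model only has to separate a handful of diseases within one species, rather than every disease of every species at once.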
Great suggestion, that's the approach I've been looking at most recently (manual selection of crop first, then a classifier for only that crop's diseases), and it does give better results. Still, many plant diseases can look quite similar; more training data once we get it (and more variety in the photos) should help accuracy.
Did you use inputs other than just image for identification? I would have thought that adding location and time of year alone could significantly improve results.
Not an expert -- would this be a result of training data doing a much better job isolating the subject vs a real world photo that may have other plants in the scene? Not that this is a solution in all cases, but is it possible your results would have improved with better real world pictures?
Not necessarily improved, because in real-world photos you can get what I'd call "the Voynich effect".
The plants in the Voynich manuscript aren't real and can't even be placed in a family, yet they still look strangely familiar to us because they are "frankenplants". You can have exactly the same problem in photos of wild plants. It only takes the leaves of a climber growing over the flowers of another plant, or different flowers and fruits mixed together, and you'll have a "new species" that's very tricky to identify. After scratching their heads for a while, humans can sense that something is wrong; machines normally can't see the problem. An (in)famous case is the photo of two Black people, casually juxtaposed, that the machine tagged as 'gorilla'.
Likely, but the whole point of 'AI' is that it should be able to identify flowers that don't look like what it's seen before. If not, it's just a huge lookup table, a memory repository.
This is, of course, a critical challenge in data science and is definitely not a trivial one to solve.
In some cases these are the goals, but dharma1 said the goal was to identify plant disease. If you can improve your results by taking better pictures then it becomes a tradeoff between training someone to take pictures and training someone to identify plant disease.
I think we have a tendency to treat AI as a silver bullet when we should be treating it as a tool we can use to help augment what we're already doing.
This product already exists. It's called Google Image Search. For instance, I was actually able to identify the second flower he takes a picture of in the video as Camellia Japonica simply from a screenshot of the video. I'd be blown away if they were able to assemble a better training set than Google. Kind of disappointing, as from the headline I was expecting something based on NIR spectroscopy, like the SciO molecular sensor (https://www.consumerphysics.com/)
There are two problems with this approach. First, lots and lots of photos on the internet are incorrectly identified. I've seen web pages telling you how to include some edible spice in your dishes, illustrated with a stock photo of a similar (very poisonous) species incorrectly tagged as yummy.
Second, it's probably a Camellia japonica... but it could also be a Camellia x williamsii. And you need to know that there is a Camellia sasanqua as well. A trained human can spot "leaves too big for sasanqua" in milliseconds (or too shiny, or suspiciously blue, or Photoshopped, etc.), but this is not so easy for a program. Image search won't spot the differences and will just leave you with the most common option.
I don't know if the mention was intentional or not, but Goggles is actually dead as a product. They removed it from the iOS app, and the Android app is unsupported, though it remains in the store.
In this respect, as the GP says, image search is currently the best option
Goggles (the concept) isn't dead. They moved the functionality into Google Assistant so you didn't need the app. Just hold down the home button so it takes a picture of the camera view.
Huh, I had no idea. An Easter egg, almost, as I've never seen this mentioned by Google.
It works OK, but seems pretty beta right now. I pointed my camera at a business card and pressed the home button. Google Assistant appeared and I asked it to tell me about what was on my screen. It noticed one piece of the address and brought up a listing for that general area, but there was no way to inquire further into any of the rest of the card.
Sorry, I should have been more clear. It's moved into the "now on tap" feature, which now links to Google Assistant. You're right though, if you ask what's in your screen it has the same effect.
A great app, with collaborative tools and a community around plant identification, is the French Pl@ntNet http://identify.plantnet-project.org/ from Agropolis Fondation, Tela Botanica, INRIA, CIRAD, CNRS, INRA, IRD and Montpellier University.
> Fast forward to today - the tech is available and the team is assembled to make PlantSnap a reality. With a beta version completed, and 250,000 images in the database
250,000 images doesn't sound like a large enough training set to be effective on anything but the most common plants.
To help others understand why: the "Deep Learning" book, which summarizes the current state of the art in DL, advises having at least 5,000 samples (images) per class for OK performance (equal to non-DL approaches) and 50k to 100k for state-of-the-art results. A class here would be a plant species.
So even with 5k samples, the 250k image corpus would only have 50 species, using this rule of thumb. A good engineer could pick up DL and build a system that performs to this standard, because the tricks are all written down in the literature.
If they do better, they either exploit unpublished methods, or researched those methods themselves, with their researchers.
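The rule-of-thumb arithmetic above is easy to check (the thresholds are the ones quoted from the book, not exact requirements):

```python
# Back-of-envelope check of the samples-per-class rule of thumb.
corpus = 250_000          # PlantSnap's stated image count
ok_per_class = 5_000      # "OK performance" per the Deep Learning book
sota_per_class = 50_000   # lower bound quoted for state-of-the-art

print(corpus // ok_per_class)    # species coverable at "OK" level: 50
print(corpus // sota_per_class)  # species coverable at SOTA level: 5
```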
But there is a lot of transfer learning between classes. For example, the network may learn to detect "broad leaves" or "red stems" for a common plant. It will then be able to reuse those features on less common plants, requiring much less training data for them. Transfer learning has been shown to work with just a single image of a desired class.
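One common way this single-image regime works is to freeze a pretrained feature extractor and classify a new class by nearest neighbour in feature space. A toy sketch (the species names and vectors are made up; the "images" are already feature vectors for simplicity):

```python
# Illustrative one-shot sketch: reuse frozen features learned on common
# plants, and add a rare species with a single labelled example.

def features(image):
    # Stand-in for a pretrained network's penultimate-layer output;
    # here the "image" is already a feature vector.
    return image

def nearest(query, prototypes):
    """prototypes: {species: single reference feature vector}."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    q = features(query)
    return min(prototypes, key=lambda s: sqdist(q, prototypes[s]))

# One labelled image per rare species is enough to add it:
protos = {"rare_orchid": [0.9, 0.1], "common_daisy": [0.1, 0.8]}
print(nearest([0.85, 0.2], protos))  # rare_orchid
```

The heavy lifting was done when the features were learned; the new class only needs one reference point.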
It's also nice that life naturally fits into nested hierarchies because of evolution. So if you can recognize what family it belongs to, then that narrows down the possible sub families it can belong to. That in turn narrows down the possible clades it can belong to, etc, which narrows down the exact species. You couldn't find a more perfect use case for hierarchical softmax!
We did this http://www.luontoportti.com/suomi/en/ a couple of years ago. It is based on an elimination process – you describe what you see and the application narrows down the options. It works very well in practice, and can be applied to all sorts of identification purposes in the wild, such as fish, birds and butterflies.
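The elimination process can be sketched as repeated filtering over a trait table (the traits and species below are illustrative, not the app's actual data):

```python
# Rough sketch of an elimination key: each answer filters the
# remaining candidates until one (or a few) species are left.

SPECIES = {
    "oxeye daisy":  {"petal_colour": "white",  "petal_count": "many"},
    "buttercup":    {"petal_colour": "yellow", "petal_count": "5"},
    "wood anemone": {"petal_colour": "white",  "petal_count": "6-7"},
}

def eliminate(candidates, trait, answer):
    """Keep only candidates whose trait matches the user's answer."""
    return {name: t for name, t in candidates.items() if t[trait] == answer}

remaining = eliminate(SPECIES, "petal_colour", "white")
remaining = eliminate(remaining, "petal_count", "many")
print(list(remaining))  # ['oxeye daisy']
```

Unlike image recognition, this only asks the user for observations they can reliably make, which is why it works well in practice.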
We also gave the image recognition path some thought, but it seemed to be quite a tall order. Hopefully they come up with a novel approach!
I made it as an experiment, results vary and can be awfully wrong but it's funny to see the confusions the neural network can make.
Funny until somebody eats a deadly mushroom because of me, I guess.
I already put plenty of warnings, so fingers crossed, but it's hard to be sure it won't be misused.
This is an interesting problem that almost all guidebook authors have to contend with: sharing knowledge of a topic may expose people to risk. But I think that's maybe a false conclusion.
It's a given that all guides contain some false information, which raises the question: are guides worse than no guides? I think this is easily answered: more information is better.
You can give people information but you can't understand it for them.
I volunteered at Audubon for a decade. I've got all the books, charts. I hike and camp. I can't identify rocks, birds, fish, trees, clouds, etc to save my life.
I've always been incredibly curious about all the life around me but at the age of 35 it finally started to bug me that I had no idea what, for example, the dozens of types of birds around me were called. For the past few months I've started making it a point to identify every single bird that I see around the yard, to spend a few minutes locating a photo of it and listening to its call and reading information about it on Wikipedia. Just in the past few months I've identified over two dozen birds that I grew up seeing and hearing and now I finally know what they're called.
What I'm realizing is that it takes active participation and interest in identifying things to really learn their names and their histories, not just a passing interest.
The Cornell Lab of Ornithology has a site [1] and an app that is amazing for identifying birds. My favorite feature is how it also lists birds that are similar, which makes it a lot easier to find the specific bird you're looking for. I then compare visual information with the auditory information of the birds' calls to make a concrete identification.
This is made with convnets, so just point me to a dataset with a thousand examples per type and I'll give you a working app in days. Of course, finding and preparing that data is the real challenge.
It's made by French researchers, I think. It doesn't work perfectly, but I did identify a lot of plants I didn't know with it.
You can snap multiple pictures of the same plant, for example, 2 pictures of the leaves, 1 of a flower and 1 of the bark, and then use the combination to search. You can also submit your observations to have them identified by experts.
Only slightly related, but I've always wondered about an app that told you what crop was growing in the area you were driving through. Often they're recognisable, but sometimes they're not. No business case for it, just to satisfy curiosity.
This is very cool. I've been thinking about something similar, but only for mushrooms. Some people in my country pick wild mushrooms, and sometimes it's very difficult to distinguish between the edible and the poisonous types. Such an application, if it existed, could save many lives. The same concept could be used for other plants that people gather, like herbs and wild berries.
I've had good luck with plants.usda.gov identifying plants by geographical location. Just use the advanced search, choose the location you found the plant, e.g. state:county, and as much information as you can deduce, flower color, growth form, etc. It'll usually provide pictures and interesting things like whether Native Americans used it medicinally, etc.
http://www.gardenanswers.com app has the ability to automatically identify plants. If you can't find your plant, because it isn't blooming or is immature, or just isn't distinctive enough for image search, you can ask a horticulturist for a small fee.
Definitely an interesting concept. There's a bit more information on the indiegogo website [1].
The problems they mention seem to be quite standard for image recognition (scale and perspective variance); however, I could imagine this being quite disastrous for plant recognition, given the massive diversity of plants: e.g. a leaf viewed from the side could equally well be a narrower leaf from a different species.
As they write on [1]: "Our challenge comes from adapting our image recognition platform to recognize different shapes and sizes of the same plants, flowers and trees. We know this is possible, and we are close to cracking it."
Honestly they don't seem to be that close to cracking it, like at all.
They may be using camera phone images, but they are well-framed images of very distinctive features. Trying to recognize a literal tree in a forest is going to be far harder, and recognizing a tree based on a dead leaf (as a photo towards the bottom suggests) can be very, very tricky. Also, I have to admit to being concerned that adding "any known plant" will kill accuracy. The more classification endpoints, the less likely you are to get a decent result, and you already have to deal with far greater scale differences than usual (basically, it would be like identifying the breed of a dog from anything from a full picture to a photo of a single claw).
It's a nice idea, but I have a feeling it's never going to be better than human enthusiasts/experts, who you can get to identify plants from photos for free already[1].
I can immediately think of all kinds of challenges that are really hard to overcome: diseased leaves, different seasons, plus all the usual glare/shading/background issues.
Botanical classification is interesting but difficult. Once trained, it's fast enough, but can "deep learning" be anywhere near accurate enough?
I'd guess there'd be a focus on salient botanical features for classification, and perhaps the human could be enlisted to circle them. There could be a "twenty questions"-style narrowing down, perhaps using images.
Dichotomous keys are mostly useless when you deal with photos. For some reason, people always find the most useless camera position when taking photos of unknown plants.
That's a two-edged sword; I've seen a few birds which, if the range diagrams in my field guides are to be believed, have no business being anywhere near where I am, and were probably up to no good. Granted plants move less quickly, but I'd hesitate to put all that much weight on location all the same.
Nice, will give it a try. Hope you didn't steal the spot where I get my mushrooms :P
Google Images actually works quite well; I've run a lot of tests. The only problem is their TOS. I could easily write something, but I don't want to be flagged, and I don't want to use their API.
Wut? I know it can be deadly, but it's your responsibility whether to eat it or not. I'm thinking about providing information, as you'd want for any other type of plant: whether it's the right time to cut it, etc. Nothing about eating it or not. I don't really need an app to find some morels.
I would imagine a series of questions that you answer about the plant would help increase the accuracy. (Leaf patterns, etc.) Or, at least, whatever answer it comes up with should confirm the defining characteristics.
Having recently become the owner of a garden, this looks very interesting. I've been using the myGardenAnswers app, which purported to do the same thing. It's been singularly useless so far, though.
Glad to see this as it is something I have been interested in for a while. There are some online tools I have tried in the past but nothing that really seemed to work well.
Or go get a book on plants and you'll be able to identify them pretty easily. The guidebooks ask you questions to narrow it down (how many petals does the flower have? for example). You don't need any knowledge going into using one, and you'll learn to observe and appreciate plants more through the process of IDing! Newcomb's Wildflower Guide is the go-to for beginners on the East Coast, but there are many.
I trained my kids, first on mushrooms, then on flowers and trees. They are better than any app :). But I'd definitely love to have an app to confirm our findings sometimes.
Ha ha, good one. It's good that they love nature, and it's nice if there are good companion apps. Some I saw are not that great, so there is clearly a need for more.
WTF, $150K of his own money over 5 years? I don't think this is wise at a business level under any circumstances for a bootstrapped project.
Unless you have a ton of money to waste and this is your caprice, of course.