Why aren't images being acquired by my analyze() method?

 
Mason Buchanan
Ranch Hand
I'm currently working on an Android application (using Java in Android Studio) that captures images from a live camera preview and analyzes/labels them. The application is built from two components: a CameraX preview and Google ML Kit Object Detection, which I'm implementing by following the official Object Detection documentation.

While the CameraX preview works absolutely fine, the Google ML Kit side is where I run into trouble. My Google ML Kit Object Detection does not carry out its intended purpose: detecting objects. To add to my confusion, no errors are reported, either before or after running the program. All I know for sure is that the application does not process the image within my analyze() method, and therein lies the problem.

How do I know this? I added Log.d("TAG", "onSuccess " + detectedObjects.size()); within my onSuccess method to determine the size of the returned object list, only for the Android Studio Logcat to print D/TAG: onSuccess0 up to 20 times within the span of several seconds of running the application. Of the four steps outlined in the aforementioned Google ML Kit documentation, Step 2: Prepare the input image and Step 3: Process the image are where I'm instructed to write the following code.

Rather than bombard this post with the entire page of code, I've chosen to provide the analyze() method, where the image preparation and processing take place:
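(The code block itself didn't carry over into this copy of the thread. For reference, the pattern those two steps describe looks roughly like the sketch below, based on the ML Kit documentation; the detector setup shown in the comment and the "TAG" string are assumptions, not the exact code from this post.)

// Sketch of the analyze() method of an ImageAnalysis.Analyzer, following the
// "Prepare the input image" and "Process the image" steps of the ML Kit docs.
// Assumes a detector created elsewhere, e.g.
//   ObjectDetector objectDetector = ObjectDetection.getClient(
//           new ObjectDetectorOptions.Builder()
//                   .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
//                   .build());
@Override
@SuppressLint("UnsafeOptInUsageError")
public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null) {
        // Step 2: prepare the input image
        InputImage image = InputImage.fromMediaImage(
                mediaImage, imageProxy.getImageInfo().getRotationDegrees());
        // Step 3: process the image
        objectDetector.process(image)
                .addOnSuccessListener(detectedObjects ->
                        Log.d("TAG", "onSuccess " + detectedObjects.size()))
                .addOnFailureListener(e -> Log.e("TAG", "Detection failed", e))
                .addOnCompleteListener(task -> imageProxy.close()); // release the frame
    } else {
        imageProxy.close();
    }
}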


While I understand that this may seem like an Android question, the Java code is the focus of the issue: what exactly is wrong with the code that prevents images from being acquired by the aforementioned method? I have been scouring the internet for weeks for any guidance on resolving this issue. Any further information needed to supplement this question will be provided upon request!
 
Tim Moores
Bartender
The first question is: is this code actually being run? Meaning, is mediaImage != null?
 
Mason Buchanan
Ranch Hand
@Tim Moores The reasoning for placing mediaImage != null there comes from this exact step of the Google ML Kit documentation, which is needed to prepare the input image. I should probably clarify that I am fairly new to Java programming, so I was following the documentation in order to see this through to its end result. I'm unsure whether mediaImage is indeed null; is there any method I should use to double-check?

Because I was simply following the documentation, I must ask: would the line if (mediaImage != null) { .. } mean that if mediaImage wasn't null, the code should continue on to what I've typed within the braces? And if null means that mediaImage is empty, how would I resolve that?
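(For what it's worth, one way to double-check is simply to log both branches and watch Logcat; a minimal sketch, with the tag string as a placeholder:)

if (mediaImage != null) {
    Log.d("TAG", "mediaImage acquired: " + mediaImage.getWidth() + "x" + mediaImage.getHeight());
    // ... existing prepare/process code ...
} else {
    Log.d("TAG", "mediaImage is null");
}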

 
Mason Buchanan
Ranch Hand
So update! I Log.d(..) the following code to see if it was throwing null like so;
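(The snippet itself didn't carry over into this copy of the thread. Going by the replies that follow, the message was logged inside the non-null branch, so the check presumably looked something like this, though not necessarily verbatim:)

if (mediaImage != null) {
    // Note: this branch only runs when mediaImage is NOT null
    Log.d("TAG", "mediaImage is throwing null");
    // ... rest of the prepare/process code ...
}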



Sure enough, the console did print D/TAG: mediaImage is throwing null.

So the application launches just fine. But considering that mediaImage is null, my question now is why mediaImage is showing up empty when I'm following the official Google ML Kit documentation verbatim.
 
Paul Clapham
Marshal

Mason Buchanan wrote:So update! I Log.d(..) the following code to see if it was throwing null like so;



I don't know what you mean by "throwing null". The code you have there logs the message if the mediaImage variable is NOT null. If it was null then yes, nothing would happen. But it isn't, so your posted code is all being run. (Which answers Tim's question.)

As for your actual problem, I don't know anything about that topic so I can't give you any help there, sorry.
 
Mason Buchanan
Ranch Hand

Paul Clapham wrote:

Mason Buchanan wrote:So update! I Log.d(..) the following code to see if it was throwing null like so;



I don't know what you mean by "throwing null". The code you have there logs the message if the mediaImage variable is NOT null. If it was null then yes, nothing would happen. But it isn't, so your posted code is all being run. (Which answers Tim's question.)

As for your actual problem, I don't know anything about that topic so I can't give you any help there, sorry.



You're absolutely right. I should not have typed "throwing null", knowing that != meant the code runs when mediaImage wasn't null. My apologies, and thank you for clearing that up! Hopefully somebody will be able to clarify the underlying problem. By any chance, would you know where I could turn for guidance in resolving this? I've already tried StackOverflow and Reddit.
 
Paul Clapham
Marshal
Looking at your code: lines 27 to 38 don't do anything, no matter what the object detection process does. Line 27 produces an empty array and the following lines process all of the entries in that empty array, i.e. they always do nothing.

I know, your logging code says that zero objects were detected. But if the tutorial contains code like that, it's not impossible that it contains other bad code, for example bad code which prevents detection of objects.
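In other words, the shape of the problem is roughly this (a schematic illustration, not the exact code from the post):

List<DetectedObject> results = new ArrayList<>();        // "line 27": a brand-new, empty list
for (DetectedObject detectedObject : results) {           // "line 28": iterates zero times
    // never reached, no matter how many objects the detector actually found
}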

Like I said, I have no experience with this topic; I'm just looking at the code and drawing speculative conclusions from it.
 
Tim Moores
Bartender

Paul Clapham wrote:Line 27 produces an empty array and the following lines process all of the entries in that empty array, i.e. they always do nothing.


Good catch! Line 27 should be deleted, and line 28 should use "detectedObjects" instead of "results".
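Applied to the success callback, that change would look roughly like this (a sketch; the surrounding code isn't quoted in the thread, and the bounding-box logging is just an example of using each result):

objectDetector.process(image)
        .addOnSuccessListener(detectedObjects -> {
            Log.d("TAG", "onSuccess " + detectedObjects.size());
            for (DetectedObject detectedObject : detectedObjects) {   // iterate the list the detector returned
                Log.d("TAG", "bounding box: " + detectedObject.getBoundingBox());
            }
        })
        .addOnFailureListener(e -> Log.e("TAG", "Detection failed", e))
        .addOnCompleteListener(task -> imageProxy.close());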
 
Mason Buchanan
Ranch Hand

Tim Moores wrote:

Paul Clapham wrote:Line 27 produces an empty array and the following lines process all of the entries in that empty array, i.e. they always do nothing.


Good catch! Line 27 should be deleted, and line 28 should use "detectedObjects" instead of "results".



Alright, so I've deleted line 27. I've then proceeded to turn line 28 into:



After doing this, I ran the application again and used Log.d("TAG", "onSuccess " + detectedObjects.size()); in the onSuccess method on line 25, which still prints "D/TAG: onSuccess0", which still means the returned object list size is 0, right? Seeing as line 27 no longer exists to produce an empty array, is there any reason why the following lines still aren't processing anything?

Once again, I must thank you both for your help thus far! It truly means a lot, considering this is the first real progress I have made in a while with this problem. However, the question still stands.
 
Steven Letson
Greenhorn
I tried to write a minimal version and it works fine for me. Could you try it?
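(The original code blocks didn't carry over into this copy of the thread. Below is a minimal sketch of the kind of activity described: a CameraX preview plus ML Kit object detection that logs detectedObjects.size(). It assumes the androidx.camera dependencies (camera-camera2, camera-lifecycle, camera-view) and com.google.mlkit:object-detection; the package name, layout name, and view id are placeholders, and runtime camera-permission handling is omitted, so this is not Steven's exact code.)

package com.example.mlkitdebug;

import android.annotation.SuppressLint;
import android.media.Image;
import android.os.Bundle;
import android.util.Log;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageProxy;
import androidx.camera.core.Preview;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.camera.view.PreviewView;
import androidx.core.content.ContextCompat;

import com.google.common.util.concurrent.ListenableFuture;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.objects.ObjectDetection;
import com.google.mlkit.vision.objects.ObjectDetector;
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions;

import java.util.concurrent.ExecutionException;

public final class DebugActivity extends AppCompatActivity {

    private static final String TAG = "DebugActivity";
    private ObjectDetector objectDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_debug);

        // STREAM_MODE is the detector mode meant for live camera frames.
        objectDetector = ObjectDetection.getClient(
                new ObjectDetectorOptions.Builder()
                        .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
                        .build());

        PreviewView previewView = findViewById(R.id.previewView);

        // Note: the CAMERA permission must already be granted; runtime permission
        // handling is intentionally left out of this sketch.
        ListenableFuture<ProcessCameraProvider> providerFuture =
                ProcessCameraProvider.getInstance(this);
        providerFuture.addListener(() -> {
            try {
                ProcessCameraProvider cameraProvider = providerFuture.get();

                Preview preview = new Preview.Builder().build();
                preview.setSurfaceProvider(previewView.getSurfaceProvider());

                ImageAnalysis analysis = new ImageAnalysis.Builder()
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build();
                analysis.setAnalyzer(ContextCompat.getMainExecutor(this), this::analyzeFrame);

                cameraProvider.unbindAll();
                cameraProvider.bindToLifecycle(
                        this, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis);
            } catch (ExecutionException | InterruptedException e) {
                Log.e(TAG, "Could not obtain camera provider", e);
            }
        }, ContextCompat.getMainExecutor(this));
    }

    @SuppressLint("UnsafeOptInUsageError")
    private void analyzeFrame(@NonNull ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage == null) {
            imageProxy.close();
            return;
        }
        InputImage image = InputImage.fromMediaImage(
                mediaImage, imageProxy.getImageInfo().getRotationDegrees());
        objectDetector.process(image)
                .addOnSuccessListener(detectedObjects ->
                        Log.d(TAG, "detectedObjects.size=" + detectedObjects.size()))
                .addOnFailureListener(e -> Log.e(TAG, "Detection failed", e))
                // Close the frame once detection finishes so the next frame can be analyzed.
                .addOnCompleteListener(task -> imageProxy.close());
    }
}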



Include the camera permission in AndroidManifest.xml:
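(That is, something like this inside the <manifest> element; note that on Android 6.0+ the permission also has to be granted at runtime, which the activity sketch above doesn't handle:)

<uses-permission android:name="android.permission.CAMERA" />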


Layout:
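(A sketch of res/layout/activity_debug.xml as a single full-screen PreviewView; the id matches what the activity sketch looks up and is an assumption:)

<androidx.camera.view.PreviewView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/previewView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />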


To check log:
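(For example, filter Logcat in Android Studio by the tag used above, or from a terminal:)

adb logcat -s DebugActivity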


Let me know if this works for you. If so, we can dig deeper into the diff between your code and this.


BTW, where did you see the problematic code example from line 27? That doc needs to be fixed.
 
Mason Buchanan
Ranch Hand
@StevenLetson First of all, I just want to thank you so much for taking the time to write out a minimal version of the program that uses the same components. Upon seeing your response, at first I was unclear whether I was supposed to create a new class or a new project, so I ended up doing the latter: starting a new project altogether. From there, I used the code written for the "public final class DebugActivity" in my MainActivity. For the most part, I was able to write out the code smoothly. The only errors I got were in the onCreate method, as marked with the double slashes:



Aside from this, everything else in your code went in smoothly! These are the only errors I received, and I sincerely hope you can shed some light on how to resolve them. Hopefully, when this is said and done, we can move forward on seeing what was wrong with my code.

Oh, and you also wanted to know where line 27 from my OP came from. I was actually following a specific StackOverflow question not too different from my situation, except that user used a custom model, whereas I used a base model. I figured that some similarity in how we wrote our code would solve my problems, which evidently wasn't the case.

Edit: to specifically mention Steven Letson on his reply at the beginning of my message
 
Steven Letson
Greenhorn
My bad. I actually missed one class. Please create this class as well.


 
Steven Letson
Greenhorn
Hi Mason,

Based on your questions and our interaction so far, I sense you might be new to Android development. In that case, I would suggest investing a bit more time in learning Android and Java/Kotlin basics first, which would probably help your productivity in the long term.

You can find some really good beginner courses on Coursera, edX, Udemy, Udacity, and so on. Try listening to the intros and find one you are comfortable with. Hopefully you can learn more effectively from those courses and keep getting better at creating good apps.
 
Joe McMillan
Greenhorn
Hi!

I've just discovered this thread since I've had similar issues with face detection in Google's ML Kit: https://stackoverflow.com/questions/67217419/why-is-googles-ml-face-detection-kit-crashing-on-process (I even got so desperate I added a bounty, lol). I really appreciate all the insight you've provided.

I tried running the code you provided, added all the dependencies (CameraX, Google ML Kit Object Detection), and added the camera permission, but it's just showing a black screen for me, which I guess is the PreviewView. Any idea what may be causing this?

Thanks.

 
Steven Letson
Greenhorn
Hi Joe,

Have you tried building and running the ML Kit quickstart app (https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart)? Does the CameraXLivePreview in that app work correctly?
My code is actually a simplified version of that sample code. I verified that it works fine on my phone, but I might have missed something. For example, I removed the permission-related code (https://github.com/googlesamples/mlkit/blob/4645ec6011d5373283b31b79a12fd62f48af3c5f/android/vision-quickstart/app/src/main/java/com/google/mlkit/vision/demo/java/CameraXLivePreviewActivity.java#L440-L497). However, it might be required on certain Android OS levels, I guess. You could try adding the permission-related code back to see if it solves your case.

 
Joe McMillan
Greenhorn
Hi,

Tried that, not working.

I have a similar problem in my faceDetector app (https://coderanch.com/t/741943/mobile/Google-ML-Face-Detection-Kit). I looked through these posts to see if anything could help me, but I couldn't find anything related to real-time face detection. However, it seems my problem is instead related to the process method. Landmark detection works for me (the analyse method in my case), but real-time face detection is what I'm struggling with a lot (the process method, which processes faces frame by frame). If you could help me, I would really appreciate it. I've been struggling with this problem for several weeks now. Thank you so much.

Regards,
Krish
 
Steven Letson
Greenhorn
Hi Joe,

If the ML Kit quick start app: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart doesn't work for you, there might be a bigger problem, not specific to your code.

(1) Does the StillImageActivity or LivePreviewActivity in the quickstart work for you?

(2) What device are you using?

(3) Could you try a different device to see whether the ML Kit quickstart app works on any device for you at all?

(4) Could you try a CameraX tutorial without ML Kit to determine whether the issue is related to CameraX or to ML Kit?
 
Mason Buchanan
Ranch Hand
@StevenLetson

Your code does indeed work! The onSuccess method logs "D/DebugActivity: detectedObjects.size=1" in my Logcat, which confirms that the application is carrying out its intended purpose of detecting objects, right? Which makes me wonder why my code in the OP isn't doing the same. Would you happen to know why?
 
Joe McMillan
Greenhorn
@StevenLetson, I enabled the webcam for my camera on the emulator through advanced settings, but it's still showing a blank black screen for me. I also tried running it on a different API 30 device, but still no luck. Also, what might be a reason that CameraX doesn't work on the Android Studio emulator? Thanks.
 
Mason Buchanan
Ranch Hand

Joe McMillan wrote:@StevenLetson, I enabled the webcam for my camera on the emulator through advanced settings, but it's still showing a blank black screen for me. I also tried running it on a different API 30 device, but still no luck. Also, what might be a reason that CameraX doesn't work on the Android Studio emulator? Thanks.



Hi Joe,

I would recommend running the program on a device that this application was written for. When launching on the device, allow the camera permission to be used by the application.

As for CameraX not working, I'd recommend writing an application solely with a CameraX preview, then using this StackOverflow question and answer to resolve the black screen issue, as I used to face the same issue. Hope this works!
 
Joe McMillan
Greenhorn
Hi, thanks. Anything you've come across that might be able to help me with this post: https://coderanch.com/t/741943/mobile/Google-ML-Face-Detection-Kit? I'm not using cameraX, but rather a custom camera API. Thanks!
 
Mason Buchanan
Ranch Hand
@JoeMcMillan Unfortunately not; this thread pertains to the question asked in the OP. I will see what I can do to help in your thread.
 
Mason Buchanan
Ranch Hand

Steven Letson wrote:
Let me know if this works for you. If so, we can dig deeper into the diff between your code and this.



@StevenLetson

Alright, so seeing as your code used a lambda expression to implement the analyzer within the bindAllCameraUseCases() method, would I have to do the same to achieve the same result with my code? That would also mean basically restructuring the program as a whole. My confusion in the OP was whether there was a flawed or missing line within the analyze() method that prevented any data from being acquired by it. The reason I aim to maintain the structure I have in the OP is that I followed the documentation to the T, and I was also following someone who did something similar using Google ML Kit and CameraX, except they had used a custom model instead of a base model.

The only thing I will be adding to this minimal version is code to label each object. Also, just to confirm: "D/DebugActivity: detectedObjects.size=1" does mean that onSuccess has returned a non-empty object list and is therefore acquiring images passed by the camera preview?
 
Steven Letson
Greenhorn
If you can see "D/DebugActivity: detectedObjects.size=1", it means it detected one object successfully.

Now, I think you can spend some time comparing the minimal version with your code to see if there are any typos, missing parts, or calls in the wrong order. It's probably a very subtle but critical difference. Let us know what you find.
 
Mason Buchanan
Ranch Hand
@StevenLetson, Alright, so after much configuration, I decided to implement some of your code with a lambda expression...
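(The snippet didn't carry over here; the lambda form described would look roughly like this sketch, with objectDetector and the "TAG" string carried over from the earlier posts as assumptions:)

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), imageProxy -> {
    @SuppressLint("UnsafeOptInUsageError")
    Image mediaImage = imageProxy.getImage();
    if (mediaImage == null) {
        imageProxy.close();
        return;
    }
    InputImage image = InputImage.fromMediaImage(
            mediaImage, imageProxy.getImageInfo().getRotationDegrees());
    objectDetector.process(image)
            .addOnSuccessListener(detectedObjects ->
                    Log.d("TAG", "onSuccess " + detectedObjects.size()))
            .addOnFailureListener(e -> Log.e("TAG", "Detection failed", e))
            .addOnCompleteListener(task -> imageProxy.close());
});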


and it printed "D/TAG: onSuccess1", which means objects were detected in the camera preview. Your code worked! Consider your answer a success!

...Now, I just need to move on to labelling.
 