Kotlin Android CameraX Object Detection Example

Android CameraX in Kotlin


2 Examples

  1. This is an Android CameraX object detection example. It is written in Kotlin, supports AndroidX, and uses the Firebase ML Kit library for object detection.

    Requirements

    Because it uses CameraX, this project requires Android API level 21 and above.

    Build.gradle

    Go to your app-level build.gradle and add the following dependencies (the $kotlin_version and $camerax_version variables are assumed to be defined in the project-level build.gradle):

    dependencies {
        implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    
        implementation "androidx.appcompat:appcompat:1.1.0-alpha05"
        implementation "androidx.core:core-ktx:1.2.0-alpha01"
        implementation "androidx.constraintlayout:constraintlayout:1.1.3"
    
        implementation "com.google.firebase:firebase-ml-vision:20.0.0"
        implementation "com.google.firebase:firebase-ml-vision-object-detection-model:16.0.0"
    
        implementation "androidx.camera:camera-core:$camerax_version"
        implementation "androidx.camera:camera-camera2:$camerax_version"
    }

    After adding the dependencies, apply the Google Services plugin at the bottom of the same file (this assumes the app is already registered with Firebase and google-services.json has been added to the app module):

    apply plugin: "com.google.gms.google-services"

    ObjectDetectionAnalyzer

    Create a class that implements the ImageAnalysis.Analyzer interface:

    class ObjectDetectionAnalyzer(private val overlay: GraphicOverlay) : ImageAnalysis.Analyzer {

    GraphicOverlay is a custom View defined in the project; it sits on top of the camera preview so detection results can be drawn over the image.
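
    GraphicOverlay and BoxData are not shown in this tutorial; they ship with the downloadable project. To make the analyzer code easier to follow, here is a minimal, hypothetical sketch of the surface the analyzer relies on (setSize(), clear() and add()). The paints and scaling logic below are assumptions, not the project's actual implementation:

    import android.content.Context
    import android.graphics.Canvas
    import android.graphics.Color
    import android.graphics.Paint
    import android.graphics.Rect
    import android.util.AttributeSet
    import android.view.View

    // Hypothetical sketch: a label plus the detected bounding box.
    data class BoxData(val text: String, val box: Rect)

    class GraphicOverlay @JvmOverloads constructor(
        context: Context,
        attrs: AttributeSet? = null
    ) : View(context, attrs) {

        private val boxes = mutableListOf<BoxData>()
        private var xScale = 1f
        private var yScale = 1f

        private val boxPaint = Paint().apply {
            style = Paint.Style.STROKE
            strokeWidth = 4f
            color = Color.GREEN
        }
        private val textPaint = Paint().apply {
            textSize = 36f
            color = Color.GREEN
        }

        // Remember the analyzed frame size and the size it is rendered at,
        // so bounding boxes can be scaled into view coordinates.
        fun setSize(frameWidth: Int, frameHeight: Int, scaledWidth: Int, scaledHeight: Int) {
            xScale = if (frameWidth > 0) scaledWidth.toFloat() / frameWidth else 1f
            yScale = if (frameHeight > 0) scaledHeight.toFloat() / frameHeight else 1f
        }

        fun clear() {
            boxes.clear()
            postInvalidate()
        }

        fun add(data: BoxData) {
            boxes.add(data)
            postInvalidate()
        }

        override fun onDraw(canvas: Canvas) {
            super.onDraw(canvas)
            for ((text, box) in boxes) {
                canvas.drawRect(
                    box.left * xScale,
                    box.top * yScale,
                    box.right * xScale,
                    box.bottom * yScale,
                    boxPaint
                )
                canvas.drawText(text, box.left * xScale, box.top * yScale - 8f, textPaint)
            }
        }
    }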

    Prepare instance fields:

        @GuardedBy("this")
        private var processingImage: Image? = null
    
        private val detector: FirebaseVisionObjectDetector
    
        @GuardedBy("this")
        @FirebaseVisionImageMetadata.Rotation
        var rotation = FirebaseVisionImageMetadata.ROTATION_90
    
        @GuardedBy("this")
        var scaledWidth = 0
    
        @GuardedBy("this")
        var scaledHeight = 0

    NB: @GuardedBy is an annotation denoting that the annotated field or method can be accessed only while holding the referenced lock.

    Create an init block to initialize some of the Firebase ML classes:

        init {

    Inside the init initialize the FirebaseVisionObjectDetectorOptions:

            val options = FirebaseVisionObjectDetectorOptions.Builder()
                .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
                .enableClassification()
                .build()

    then obtain the FirebaseVisionObjectDetector and close the init block:

            detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)
        }

    Create a method to process the latest frame:

        @Synchronized
        private fun processLatestFrame() {
            val processingImage = processingImage
            if (processingImage != null) {
                val image = FirebaseVisionImage.fromMediaImage(
                    processingImage,
                    rotation
                )
    
                when (rotation) {
                    FirebaseVisionImageMetadata.ROTATION_0,
                    FirebaseVisionImageMetadata.ROTATION_180 -> {
                        overlay.setSize(
                            processingImage.width,
                            processingImage.height,
                            scaledHeight,
                            scaledWidth
                        )
                    }
                    FirebaseVisionImageMetadata.ROTATION_90,
                    FirebaseVisionImageMetadata.ROTATION_270 -> {
                        overlay.setSize(
                            processingImage.height,
                            processingImage.width,
                            scaledWidth,
                            scaledHeight
                        )
                    }
                }
    
                detector.processImage(image)
                    .addOnSuccessListener { results ->
                        debugPrint(results)
    
                        overlay.clear()
    
                        for (obj in results) {
                            val box = obj.boundingBox
    
                            val name = "${categoryNames[obj.classificationCategory]}"
    
                            val confidence =
                                if (obj.classificationCategory != FirebaseVisionObject.CATEGORY_UNKNOWN) {
                                    val confidence: Int =
                                        obj.classificationConfidence!!.times(100).toInt()
                                    " $confidence%"
                                } else ""
    
                            overlay.add(BoxData("$name$confidence", box))
                        }
    
                        this.processingImage = null
                    }
                    .addOnFailureListener {
                        println("failure")
    
                        this.processingImage = null
                    }
            }
        }
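
    Two names used above, categoryNames and debugPrint(), are not part of the Firebase API; they are small helpers defined in the project. A plausible minimal version inside ObjectDetectionAnalyzer, mapping the FirebaseVisionObject category constants to readable labels and logging each result, could look like this (the label strings are an assumption):

        // Hypothetical helpers; the project's own versions may differ.
        private val categoryNames: Map<Int, String> = mapOf(
            FirebaseVisionObject.CATEGORY_UNKNOWN to "Unknown",
            FirebaseVisionObject.CATEGORY_HOME_GOOD to "Home good",
            FirebaseVisionObject.CATEGORY_FASHION_GOOD to "Fashion good",
            FirebaseVisionObject.CATEGORY_FOOD to "Food",
            FirebaseVisionObject.CATEGORY_PLACE to "Place",
            FirebaseVisionObject.CATEGORY_PLANT to "Plant"
        )

        private fun debugPrint(visionObjects: List<FirebaseVisionObject>) {
            for ((index, obj) in visionObjects.withIndex()) {
                println("Detected object: $index")
                println("  category: ${obj.classificationCategory}")
                println("  trackingId: ${obj.trackingId}")
                println("  confidence: ${obj.classificationConfidence}")
                println("  boundingBox: ${obj.boundingBox}")
            }
        }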

    Then override the analyze() method:

        override fun analyze(imageProxy: ImageProxy, rotationDegrees: Int) {
            val image = imageProxy.image ?: return
    
            if (processingImage == null) {
                processingImage = image
                processLatestFrame()
            }
        }
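
    Finally, the analyzer has to be attached to a CameraX ImageAnalysis use case and bound to the activity lifecycle. The actual wiring is in the downloadable project; the sketch below is only a rough illustration against the early CameraX alpha API used in this example (exact method names changed between alpha releases), and view references such as textureView and overlay are assumptions:

    // Hypothetical Activity-side sketch; the exact CameraX alpha API differs between releases.
    private fun startCamera() {
        val analyzer = ObjectDetectionAnalyzer(overlay)

        // Preview use case: render camera frames into a TextureView.
        val previewConfig = PreviewConfig.Builder()
            .setLensFacing(CameraX.LensFacing.BACK)
            .build()
        val preview = Preview(previewConfig).apply {
            setOnPreviewOutputUpdateListener { output ->
                textureView.surfaceTexture = output.surfaceTexture
            }
        }

        // Analysis use case: only the latest frame is kept for analysis.
        val analysisConfig = ImageAnalysisConfig.Builder()
            .setLensFacing(CameraX.LensFacing.BACK)
            .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
            .build()
        val imageAnalysis = ImageAnalysis(analysisConfig).apply {
            setAnalyzer(analyzer)
        }

        // Tell the analyzer how large the on-screen overlay is so boxes can be scaled.
        overlay.post {
            analyzer.scaledWidth = overlay.width
            analyzer.scaledHeight = overlay.height
        }

        CameraX.bindToLifecycle(this, preview, imageAnalysis)
    }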

    Special thanks to @yanzm for creating this project.

    Find the whole project in the download.

  2. Hi,

    CameraX.LensFacing.FRONT (FRONT Camera)

    I get the rectangle in the correct place if I keep my object in the center of the camera view, but when I move the object to the left the rectangle moves to the right, and when I move it to the right the rectangle moves to the left.



