
CognEye

Contributors

Description

The CognEye app opens with a logo splash screen and then moves to a screen that reads out the app's features. The user can either tap a feature or speak its name, and the selected feature activates the smartphone's camera. When the camera is pointed at an object, letter, or paragraph, text-to-speech is enabled and the app reads that object, letter, or paragraph aloud. The app is still in development, and the CognEye team is working to improve its quality.
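
A minimal sketch of the text-to-speech part of this flow, assuming Android's TextToSpeech API (the tts field and speakOut helper below are illustrative names, not necessarily the app's own code):

private var tts: TextToSpeech? = null

// Initialize the speech engine once, for example in onCreate()
private fun initTts(context: Context) {
    tts = TextToSpeech(context) { status ->
        if (status == TextToSpeech.SUCCESS) {
            tts?.setLanguage(Locale.US) // assumed default language
        }
    }
}

// Read any recognized object, letter, or paragraph aloud
private fun speakOut(text: String) {
    tts?.speak(text, TextToSpeech.QUEUE_FLUSH, null, "cogneye-utterance")
}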

Code snippet for the splash screen

// Show the splash screen for four seconds, then launch MainActivity
val SPLASH_TIME_OUT = 4000L
Executors.newSingleThreadExecutor().execute {
    Thread.sleep(SPLASH_TIME_OUT)
    startActivity(Intent(this, MainActivity::class.java))
    finish() // remove the splash activity from the back stack
}
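
A common alternative, shown here only as a sketch (not the app's code), is to post a delayed Runnable on the main thread instead of sleeping a worker thread:

// Sketch: delay the transition without blocking a background thread
Handler(Looper.getMainLooper()).postDelayed({
    startActivity(Intent(this, MainActivity::class.java))
    finish()
}, SPLASH_TIME_OUT)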

Code snippet for the OCR feature

// Concatenate the recognized text blocks into a single string
SparseArray<TextBlock> sparseArray = detections.getDetectedItems();
StringBuilder stringBuilder = new StringBuilder();

for (int i = 0; i < sparseArray.size(); i++) {
    TextBlock textBlock = sparseArray.valueAt(i);
    if (textBlock != null && textBlock.getValue() != null) {
        stringBuilder.append(textBlock.getValue()).append(" ");
    }
}

final String stringText = stringBuilder.toString();
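
For context, this callback normally runs inside receiveDetections() of a Mobile Vision Detector.Processor attached to a TextRecognizer. A rough Kotlin sketch of that wiring (assumed, not necessarily the app's exact code):

val textRecognizer = TextRecognizer.Builder(applicationContext).build()
textRecognizer.setProcessor(object : Detector.Processor<TextBlock> {
    override fun release() {}

    override fun receiveDetections(detections: Detector.Detections<TextBlock>) {
        // Same logic as the Java snippet above: join the detected blocks
        val builder = StringBuilder()
        val items = detections.detectedItems
        for (i in 0 until items.size()) {
            items.valueAt(i)?.value?.let { builder.append(it).append(" ") }
        }
        // The resulting string can then be passed to TextToSpeech
    }
})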

Code snippet for object detection

private fun bindPreview(cameraProvider: ProcessCameraProvider) {
    val preview = Preview.Builder().build()
    val cameraSelector = CameraSelector.Builder()
        .requireLensFacing(CameraSelector.LENS_FACING_BACK)
        .build()
    preview.setSurfaceProvider(binding.previewView.surfaceProvider)
    val imageAnalysis = ImageAnalysis.Builder()
        .setTargetResolution(Size(720, 1280))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), ImageAnalysis.Analyzer { imageProxy ->
        val rotationDegrees = imageProxy.imageInfo.rotationDegrees
        val image = imageProxy.image
        if (image != null) {
            val processImage = fromMediaImage(image, rotationDegrees)
            objectDetector
                .process(processImage)
                .addOnSuccessListener { objects ->
                    for (i in objects) {
                        // Replace the previous overlay with a bounding box and label
                        if (binding.parentLayout.childCount > 1) binding.parentLayout.removeViewAt(1)
                        val element = Draw(
                            context = this,
                            rect = i.boundingBox,
                            text = i.labels.firstOrNull()?.text ?: "Undefined"
                        )
                        binding.parentLayout.addView(element, 1)
                        // Read the detected label aloud
                        val objectName = i.labels.firstOrNull()?.text ?: "Undefined"
                        tts!!.speak(objectName, TextToSpeech.QUEUE_FLUSH, null, "")
                    }
                    imageProxy.close()
                }
                .addOnFailureListener { imageProxy.close() }
        } else {
            imageProxy.close()
        }
    })
    // Bind the camera use cases to this activity's lifecycle
    cameraProvider.bindToLifecycle(this, cameraSelector, imageAnalysis, preview)
}
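
bindPreview() is typically invoked once the camera provider future resolves; a sketch of that standard CameraX wiring (assumed, not quoted from the app):

// Obtain the process camera provider, then bind the preview and analyzer
val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener({
    bindPreview(cameraProviderFuture.get())
}, ContextCompat.getMainExecutor(this))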

Code snippet from the Android manifest file

    <uses-permission android:name="android.permission.CAMERA"></uses-permission>
    <uses-permission android:name="android.permission.RECORD_AUDIO"></uses-permission>

Important Links

Technology stack

Tools and technologies learnt and used in the project:

  1. Java
  2. Kotlin
  3. HTML/CSS
  4. Javascript
  5. Python
  6. Android Studio

Applications

  1. CognEye will act as a bridge for visually impaired people to see the world through features such as object detection, optical character recognition, and facial recognition.
  2. CognEye will activate the smartphone's camera and voice technologies to identify objects, letters, paragraphs, and more around the user.
  3. It will also provide an inbuilt voice assistant to help the user understand the app's features and how to use them.

Future scope

  1. Deploy the facial recognition model that has already been implemented.
  2. Implement voice activation for opening the app and using its features.
  3. Test the app with visually impaired users.

Our First Prototype Screenshots

[Screenshots of the first prototype]
