Please refer to https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.5/doc/doc_en/models_list_en.md to check the supported language models.
Just replace .ChineseV3 in the demo code with your specific language, and that language will be used.
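For example, switching the demo to English would look like this (a minimal sketch; it assumes LocalFullModels exposes an EnglishV3 member, mirroring the OnlineFullModels.EnglishV3 shown later in this document):

```csharp
// Hypothetical swap: use the English V3 model instead of Chinese V3
FullOcrModel model = LocalFullModels.EnglishV3;
```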
- Install NuGet packages: Sdcb.PaddleInference, Sdcb.PaddleOCR, Sdcb.PaddleOCR.Models.Local, Sdcb.PaddleInference.runtime.win64.mkl, OpenCvSharp4.runtime.win
- Use the following C# code to get the result:

FullOcrModel model = LocalFullModels.ChineseV3;

byte[] sampleImageData;
string sampleImageUrl = @"https://www.tp-link.com.cn/content/images2017/gallery/4288_1920.jpg";
using (HttpClient http = new HttpClient())
{
    Console.WriteLine("Download sample image from: " + sampleImageUrl);
    sampleImageData = await http.GetByteArrayAsync(sampleImageUrl);
}

using (PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true, /* allow detection of rotated text */
    Enable180Classification = false, /* allow detection of text rotated more than 90 degrees */
})
{
    // Load a local file instead with the following code:
    // using (Mat src2 = Cv2.ImRead(@"C:\test.jpg"))
    using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
    {
        PaddleOcrResult result = all.Run(src);
        Console.WriteLine("Detected all texts: \n" + result.Text);
        foreach (PaddleOcrResultRegion region in result.Regions)
        {
            Console.WriteLine($"Text: {region.Text}, Score: {region.Score}, RectCenter: {region.Rect.Center}, RectSize: {region.Rect.Size}, Angle: {region.Rect.Angle}");
        }
    }
}
- Install NuGet packages: Sdcb.PaddleInference, Sdcb.PaddleOCR, Sdcb.PaddleOCR.Models.Online, Sdcb.PaddleInference.runtime.win64.mkl, OpenCvSharp4.runtime.win
- Use the following C# code to get the result:

FullOcrModel model = await OnlineFullModels.EnglishV3.DownloadAsync();

byte[] sampleImageData;
string sampleImageUrl = @"https://www.tp-link.com.cn/content/images2017/gallery/4288_1920.jpg";
using (HttpClient http = new HttpClient())
{
    Console.WriteLine("Download sample image from: " + sampleImageUrl);
    sampleImageData = await http.GetByteArrayAsync(sampleImageUrl);
}

using (PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true, /* allow detection of rotated text */
    Enable180Classification = false, /* allow detection of text rotated more than 90 degrees */
})
{
    // Load a local file instead with the following code:
    // using (Mat src2 = Cv2.ImRead(@"C:\test.jpg"))
    using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
    {
        PaddleOcrResult result = all.Run(src);
        Console.WriteLine("Detected all texts: \n" + result.Text);
        foreach (PaddleOcrResultRegion region in result.Regions)
        {
            Console.WriteLine($"Text: {region.Text}, Score: {region.Score}, RectCenter: {region.Rect.Center}, RectSize: {region.Rect.Size}, Angle: {region.Rect.Angle}");
        }
    }
}
- Use sdflysha/dotnet6-paddle:2.5.0-ubuntu22 to replace mcr.microsoft.com/dotnet/aspnet:6.0 in your Dockerfile as the Docker base image. The build steps for sdflysha/dotnet6-paddle:2.5.0-ubuntu22 are described here.
- Install NuGet Packages:
dotnet add package Sdcb.PaddleOCR.Models.Local
Please be aware that on Linux, the native binding libraries are not required; instead, you should compile your own OpenCV/PaddleInference libraries, or just use the Docker image.
- Write the following C# code to get the result (it can also be exactly the same as on Windows):
FullOcrModel model = LocalFullModels.ChineseV3;
using (PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Mkldnn()))
// Load in-memory data by following code:
// using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
using (Mat src = Cv2.ImRead(@"/app/test.jpg"))
{
Console.WriteLine(all.Run(src).Text);
}
// Install following packages:
// Sdcb.PaddleInference
// Sdcb.PaddleOCR
// Sdcb.PaddleOCR.Models.Local
// Sdcb.PaddleInference.runtime.win64.mkl (required on Windows; on Linux, use Docker)
// OpenCvSharp4.runtime.win (required on Windows; on Linux, use Docker)
byte[] sampleImageData;
string sampleImageUrl = @"https://www.tp-link.com.cn/content/images2017/gallery/4288_1920.jpg";
using (HttpClient http = new HttpClient())
{
Console.WriteLine("Download sample image from: " + sampleImageUrl);
sampleImageData = await http.GetByteArrayAsync(sampleImageUrl);
}
using (PaddleOcrDetector detector = new PaddleOcrDetector(LocalDetectionModel.ChineseV3, PaddleDevice.Mkldnn()))
using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
{
RotatedRect[] rects = detector.Run(src);
using (Mat visualized = PaddleOcrDetector.Visualize(src, rects, Scalar.Red, thickness: 2))
{
string outputFile = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyPictures), "output.jpg");
Console.WriteLine("OutputFile: " + outputFile);
visualized.ImWrite(outputFile);
}
}
// Install following packages:
// Sdcb.PaddleInference
// Sdcb.PaddleOCR
// Sdcb.PaddleOCR.Models.Local
// Sdcb.PaddleInference.runtime.win64.mkl (required on Windows; on Linux, use Docker)
// OpenCvSharp4.runtime.win (required on Windows; on Linux, use Docker)
using PaddleOcrTableRecognizer tableRec = new(LocalTableRecognitionModel.ChineseMobileV2_SLANET);
using Mat src = Cv2.ImRead(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyPictures), "table.jpg"));
// Table detection
TableDetectionResult tableResult = tableRec.Run(src);
// Normal OCR
using PaddleOcrAll all = new(LocalFullModels.ChineseV3);
all.Detector.UnclipRatio = 1.2f;
PaddleOcrResult ocrResult = all.Run(src);
// Rebuild table
string html = tableResult.RebuildTable(ocrResult);
(Image comparison: raw table, table model output, and rebuilt table.)
There are 3 steps to do OCR (a minimal sketch of how they map onto the API follows the list):

- Detection - detect the text's position, angle and area (PaddleOcrDetector)
- Classification - determine whether the text should be rotated 180 degrees
- Recognition - recognize the detected area as text
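The sketch below only uses types already shown in this document: PaddleOcrDetector exposes step 1 on its own, while PaddleOcrAll drives all three steps internally.

```csharp
// Step 1 on its own: detect rotated rectangles that contain text
using PaddleOcrDetector detector = new PaddleOcrDetector(LocalDetectionModel.ChineseV3, PaddleDevice.Mkldnn());
using Mat src = Cv2.ImRead(@"C:\test.jpg");
RotatedRect[] rects = detector.Run(src);
Console.WriteLine($"Detected {rects.Length} text regions.");

// Steps 1-3 together: PaddleOcrAll crops each detected region,
// optionally rotates it 180 degrees (step 2), then recognizes the text (step 3).
using PaddleOcrAll all = new PaddleOcrAll(LocalFullModels.ChineseV3, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true,    // step 1: allow rotated text regions
    Enable180Classification = true, // step 2: classify whether to rotate 180 degrees
};
Console.WriteLine(all.Run(src).Text);
```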
Default value: 1
This value is positively correlated with the peak memory usage of mkldnn, and negatively correlated with performance when different images are provided.

To figure out the peak memory usage corresponding to each value, you should run detection on a variety of images (using the same image repeatedly will not increase memory usage) continuously until the memory usage stabilizes within a variation of 1 GB.

For more details, please check PR #46, which decreased the default value, and the Paddle documentation for MkldnnCacheCapacity.
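A minimal sketch for experimenting with this value (assumption: your Sdcb.PaddleInference version lets you pass the mkldnn cache capacity to PaddleDevice.Mkldnn; check the exact overload and parameter name in your version):

```csharp
// Assumed overload: PaddleDevice.Mkldnn(cacheCapacity) — a smaller value lowers
// peak memory usage when many differently-sized images are processed.
using PaddleOcrAll all = new PaddleOcrAll(LocalFullModels.ChineseV3, PaddleDevice.Mkldnn(1));
using Mat src = Cv2.ImRead(@"C:\test.jpg");
Console.WriteLine(all.Run(src).Text);
```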
Default value: false
This directly affects step 2. Setting it to false skips that step, which makes it unable to detect text running from right to left (this should be acceptable because most text runs from left to right).

Disabling this option makes the full process about 10% faster.
Default value: true
This allows detection of rotated text. If your subject is 0-degree text (like a scanned table or a screenshot), you can set this parameter to false, which will improve OCR accuracy and slightly improve performance.
Default value: 1536
This affects the maximum image size in step #1 (detection). Lowering this value can improve performance and reduce memory usage, but will also lower accuracy.

You can also set this value to null; in that case, images will not be scaled down for detection, performance will drop and memory usage will be higher, but accuracy should be better.
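A combined sketch of the parameters described above. AllowRotateDetection, Enable180Classification and the Detector property are used elsewhere in this document; the MaxSize property name on the detector is an assumption to verify against your version:

```csharp
using PaddleOcrAll all = new PaddleOcrAll(LocalFullModels.ChineseV3, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true,     // default true: detect rotated text
    Enable180Classification = false, // default false: skip the 180-degree step (~10% faster)
};
all.Detector.MaxSize = 1024;  // assumed property; below the 1536 default to trade accuracy for speed/memory
// all.Detector.MaxSize = null; // or disable scale-down entirely for best accuracy

using Mat src = Cv2.ImRead(@"C:\test.jpg");
Console.WriteLine(all.Run(src).Text);
```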
Please review the Technical details section and read the Optimize parameters and performance hints section, or UseGpu.
Please refer to this demo repository, which contains a tutorial: https://github.com/sdcb/paddlesharp-ocr-aspnetcore-demo
In your service builder code, register a QueuedPaddleOcrAll Singleton:
builder.Services.AddSingleton(s =>
{
Action<PaddleConfig> device = builder.Configuration["PaddleDevice"] == "GPU" ? PaddleDevice.Gpu() : PaddleDevice.Mkldnn();
return new QueuedPaddleOcrAll(() => new PaddleOcrAll(LocalFullModels.ChineseV3, device)
{
Enable180Classification = true,
AllowRotateDetection = true,
}, consumerCount: 1);
});
In your controller, use the registered QueuedPaddleOcrAll singleton:
public class OcrController : Controller
{
private readonly QueuedPaddleOcrAll _ocr;
public OcrController(QueuedPaddleOcrAll ocr) { _ocr = ocr; }
[Route("ocr")]
public async Task<OcrResponse> Ocr(IFormFile file)
{
using MemoryStream ms = new();
using Stream stream = file.OpenReadStream();
stream.CopyTo(ms);
using Mat src = Cv2.ImDecode(ms.ToArray(), ImreadModes.Color);
double scale = 1;
using Mat scaled = src.Resize(default, scale, scale);
Stopwatch sw = Stopwatch.StartNew();
string textResult = (await _ocr.Run(scaled)).Text;
sw.Stop();
return new OcrResponse(textResult, sw.ElapsedMilliseconds);
}
}
- Remove PaddleConfig.Default.* settings because they were deleted in 2.6.0.1.
- Add one of the following configs as the 2nd parameter of PaddleOcrAll:
  - PaddleDevice.Openblas()
  - PaddleDevice.Mkldnn()
  - PaddleDevice.Onnx()
  - PaddleDevice.Gpu()
  - PaddleDevice.Gpu().And(PaddleDevice.TensorRt(...))
- Uninstall NuGet package: Sdcb.PaddleOCR.Models.LocalV3
- Install NuGet package: Sdcb.PaddleOCR.Models.Local
- Update namespaces from Sdcb.PaddleOCR.Models.LocalV3 to Sdcb.PaddleOCR.Models.Local (see the sketch after this list)
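A minimal before/after sketch of these upgrade steps (fragment only; the removed PaddleConfig.Default.* calls are only indicated in comments, and you can pick whichever PaddleDevice option fits your deployment):

```csharp
// Before (2.5.x):
// using Sdcb.PaddleOCR.Models.LocalV3;
// PaddleConfig.Default.* global settings (removed in 2.6.0.1)

// After (2.6.0.1+): the Local package, with the device passed as the 2nd constructor parameter
using Sdcb.PaddleOCR.Models.Local;

using PaddleOcrAll all = new PaddleOcrAll(LocalFullModels.ChineseV3, PaddleDevice.Mkldnn());
```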
To use TensorRT, just specify PaddleDevice.Gpu().And(PaddleDevice.TensorRt("shape-info.txt")) instead of PaddleDevice.Gpu() to make it work. 💡

Please be aware that this shape info text file (**.txt) is bound to your model. Different models have different shape info, so if you're using a complex model like Sdcb.PaddleOCR, you should use different shape files for different models, like this:
using PaddleOcrAll all = new(model,
PaddleDevice.Gpu().And(PaddleDevice.TensorRt("det.txt")),
PaddleDevice.Gpu().And(PaddleDevice.TensorRt("cls.txt")),
PaddleDevice.Gpu().And(PaddleDevice.TensorRt("rec.txt")))
{
Enable180Classification = true,
AllowRotateDetection = true,
};
In this case:

- DetectionModel will use det.txt 🔍
- 180DegreeClassificationModel will use cls.txt 🔃
- RecognitionModel will use rec.txt 🔡
NOTE 📝: The first round of TensorRT running will generate a shape info **.txt file in this folder: %AppData%\Sdcb.PaddleInference\TensorRtCache. It will take around 100 seconds to finish the TensorRT cache generation; after that, it should be faster than plain GPU mode. 🚀

If something strange happens (for example, you mistakenly created the same shape-info.txt file for different models), you can delete this folder to regenerate the TensorRT cache: %AppData%\Sdcb.PaddleInference\TensorRtCache. 🗑️
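If the cache needs to be cleared programmatically instead of by hand, a minimal sketch (the folder is the same %AppData% path stated above):

```csharp
using System;
using System.IO;

// Delete the TensorRT shape-info cache so it is regenerated on the next run
string cacheDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Sdcb.PaddleInference", "TensorRtCache");
if (Directory.Exists(cacheDir))
{
    Directory.Delete(cacheDir, recursive: true);
    Console.WriteLine($"Deleted TensorRT cache: {cacheDir}");
}
```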