As we know, glTF is a file format for 3D scenes and models that uses the JSON standard. It is an API-neutral runtime asset delivery format developed by the Khronos Group 3D Formats working group. So we get a bundle of textures along with a 3D object, which we can view directly in our web browsers. And when it comes to exporting the glb (glTF) mesh and motion from CC3 and iClone, we generally face some issues, so today we will discuss how to export it for the Sandbox.

So first we need to install the Babylon.js exporter for Autodesk Maya; here I am sharing the link of the plugin. You may see I have already installed it via the setup provided in the installation folder. Then go to Windows > Settings/Preferences > Plug-in Manager, type "Maya2" in the search field, and check the options. You should now see the Babylon option in the menu bar, like in the given image below. (This step can also be done from the script editor; see the first Maya sketch at the end of this post.)

Now you have to import the FBX (exported from CC3) into Maya, as you can see in the given image. You may see a transparency issue after the import, which you can resolve by breaking the transparency map connection in the material slot. Then you have to convert every material to an aiStandardSurface material, i.e. Arnold shaders, as you can see in the image below. (A scripted sketch of this conversion also appears at the end of this post.)

And finally, after converting all the materials into Arnold shaders, you will have the final result, which you have to export to glb. For that you have to go to Babylon > Babylon File Exporter, which will open a dialogue box. In the adjacent image you may see there is a glb option; we have to select that option and give the file a name. You have no need to select any mesh in your viewport, just export it. You may see it takes around 54 seconds to export, so don't panic. And you will have the final version of your glb without any issues, which you can check in the link.

On a separate note, some feedback, questions and issues for docTR, the end-to-end OCR library:

The end-to-end OCR library named docTR that you have developed is fantastic. It is very easy to use and gives very good results. Currently I am working on an OCR-related project. I have implemented docTR on sample images and received good results. However, I have a few questions, which I list below, and I would be grateful to receive explanations of them.

1) Which db_resnet50 model are you using: the one pretrained on the SynthText dataset, or the one tested on a real-world dataset, as mentioned in the paper?

3) Is there any way we can get the output after detection, without postprocessing?

4) How can we improve the accuracy of detection?

5) When will your private dataset be available?

6) How much training data do we need to get good results on our dataset? (The dataset type would be forms, invoices, receipts, etc.)

7) Also, you have mentioned that to train the model, each JSON file must contain 3 lists of boxes. Why are 3 lists of boxes needed for a single image?

There is also a related PR, which adds a crop-splitting feature in the recognition predictor to split boxes that are too long into smaller ones before feeding them to the recognition model. (A rough sketch of the idea appears below.)

And one issue: I can't load MASTER weights in TF with the load_pretrained_params function.

To Reproduce:

```python
from doctr.models import recognition
from doctr.models.utils.tensorflow import load_pretrained_params

master = recognition.MASTER(vocab=VOCABS, input_shape=(32, 128, 3))
# ... followed by a load_pretrained_params(...) call on line 8, which raises the error below
```

Yields:

```
Traceback (most recent call last):
  File "/home/laptopmindee/doctr/text.py", line 8, in <module>
  File "/home/laptopmindee/doctr/doctr/models/utils/tensorflow.py", line 50, in load_pretrained_params
  File "/usr/lib/python3.8/zipfile.py", line 1269, in __init__
  File "/usr/lib/python3.8/zipfile.py", line 1336, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
```

I expected the weights to be loaded properly.

Environment:
- DocTR version: 0.3.0a0
- GPU models and configuration: GPU 0: GeForce RTX 2060
- CuDNN version: Probably one of the following:
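For what it's worth, zipfile.BadZipFile at this point usually means the archive on disk is not a valid zip at all, most often a truncated download or an HTML error page saved under a .zip name. A quick way to check, as a minimal sketch; the cache path and file name below are assumptions, so point them at wherever load_pretrained_params actually stored the archive on your machine:

```python
import zipfile
from pathlib import Path

# Hypothetical cache location and file name -- adjust to where the
# weights archive was actually downloaded on your machine.
archive = Path.home() / ".cache" / "doctr" / "master_weights.zip"

# A failed or interrupted download is not a valid zip, which is exactly
# what zipfile complains about when the weights archive is opened.
if not zipfile.is_zipfile(archive):
    print(f"{archive} is not a valid zip -- delete it and retry,")
    print("so that the weights get re-downloaded from scratch.")
```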
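As for the crop-splitting PR mentioned above, the general idea can be sketched in a few lines of NumPy. This is an illustration only, not doctr's actual implementation; the function name, the aspect-ratio threshold, and the overlap value are all mine:

```python
import numpy as np

def split_wide_crop(crop: np.ndarray, max_ratio: float = 8.0, overlap: int = 16) -> list:
    """Split a crop whose width/height ratio exceeds max_ratio into
    overlapping horizontal chunks (illustrative, not doctr's code)."""
    h, w = crop.shape[:2]
    if w / h <= max_ratio:
        return [crop]  # short enough: feed to the recognition model as-is
    target_w = int(h * max_ratio)
    chunks, start = [], 0
    while start < w:
        end = min(start + target_w, w)
        chunks.append(crop[:, start:end])
        if end == w:
            break
        # Overlap the chunks so a character cut at a boundary still
        # appears whole in at least one of them.
        start = end - overlap
    return chunks
```

The per-chunk predictions then have to be merged back into a single string, with the overlap region deduplicated; presumably the predictor handles that merging after recognition.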
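Going back to the Maya tutorial above: loading the Babylon exporter can be scripted instead of using the Plug-in Manager. A minimal sketch in Maya's Python; the plug-in file name Maya2Babylon.nll.dll is my assumption about the exporter's Windows build, so check what the installer actually put in Maya's plug-ins folder:

```python
import maya.cmds as cmds

# Assumed file name of the Babylon exporter plug-in (Windows build);
# verify against the file the installer dropped into the plug-ins folder.
PLUGIN = "Maya2Babylon.nll.dll"

# Load the exporter and mark it to auto-load on future startups --
# the same effect as ticking both checkboxes in the Plug-in Manager.
if not cmds.pluginInfo(PLUGIN, query=True, loaded=True):
    cmds.loadPlugin(PLUGIN)
cmds.pluginInfo(PLUGIN, edit=True, autoload=True)
```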
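The material-conversion step can likewise be scripted. The sketch below assumes the Arnold plug-in (mtoa) is loaded; it only breaks the transparency connection and copies the diffuse color, and it does not reassign the new shader to the shading groups, so treat it as a starting point rather than a faithful converter:

```python
import maya.cmds as cmds

# Convert every non-default material to an aiStandardSurface (Arnold shader).
for mat in cmds.ls(materials=True):
    if mat in ("lambert1", "standardSurface1", "particleCloud1"):
        continue  # skip Maya's built-in default materials

    # Break the transparency map connection -- the usual cause of the
    # see-through look right after a CC3 FBX import.
    if cmds.attributeQuery("transparency", node=mat, exists=True):
        for src in cmds.listConnections(mat + ".transparency", plugs=True,
                                        source=True, destination=False) or []:
            cmds.disconnectAttr(src, mat + ".transparency")

    # Create the Arnold shader and carry the old diffuse color across.
    ai = cmds.shadingNode("aiStandardSurface", asShader=True, name=mat + "_ai")
    if cmds.attributeQuery("color", node=mat, exists=True):
        r, g, b = cmds.getAttr(mat + ".color")[0]
        cmds.setAttr(ai + ".baseColor", r, g, b, type="double3")
```

In a real conversion you would also rewire each shadingEngine's surfaceShader input to the new node; that part is left out here for brevity.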
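Finally, once the Babylon export finishes, you can sanity-check the resulting glb without opening it in a viewer: a binary glTF file always starts with a fixed 12-byte header (the ASCII magic "glTF", the container version, and the total byte length). A small sketch; the file name is just an example:

```python
import struct

# Example output name from the Babylon File Exporter step above.
path = "character.glb"

with open(path, "rb") as f:
    magic, version, length = struct.unpack("<4sII", f.read(12))

# Every valid binary glTF container begins with the magic bytes b"glTF".
assert magic == b"glTF", f"not a glb file: magic={magic!r}"
print(f"glTF container version {version}, total size {length} bytes")
```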