Nextech3D.ai Files Multiple Generative AI Patents Covering Breakthrough 3D-Model Creation For Global $5.5 Trillion Dollar Ecommerce Industry

The Company is filing multiple pivotal patents for its game-changing Generative AI

TORONTO, ON / ACCESSWIRE / March 21, 2023 / Nextech3D.AI (formerly "Nextech AR Solutions Corp" or the "Company") (OTCQX:NEXCF)(CSE:NTAR)(FSE:EP2), a generative AI-powered 3D model supplier for Amazon, P&G, Kohl's and other major e-commerce retailers, is pleased to announce that the Company has filed its second in a series of patents for converting 2D photos to 3D models. These patents position the Company as a leader in the rapidly growing 2D-photo-to-3D-model transformation taking place in the $5.5 trillion global ecommerce industry, an opportunity estimated to be worth $100 billion. Nextech3D.ai is using its newly developed AI to power its diversified 3D/AR businesses, including ARway.ai (OTC: ARWYF / CSE: ARWY), Toggle3D.ai and Nextech3D.ai.

Patent filing title: "Fixed-point diffusion for robust 2D to 3D conversion and other applications."

A major contributor to Nextech3D.ai's 3D modeling success and ability to meet market demand is its Generative Artificial Intelligence (AI). This patent builds on the Company's previously filed patents. Earlier this month, the Company filed a patent titled "Generative AI for 3D Model Creation from 2D Photos using Stable Diffusion with Deformable Template Conditioning", and late last year it filed a patent for creating complex 3D models by parts. The game-changing AI technology underpinning these patents places the Company in a leadership position in 3D modeling for ecommerce and positions it to generate significant revenue acceleration and cash flow in 2023 and beyond.

Building on the Company's previous patents, Nextech3D.ai will use fixed-point diffusion to learn to construct 3D models from 2D reference photos, starting with simpler objects and individual parts before expanding to more complex, multi-part objects.

Nima Sarshar, Chief Technology Officer of Nextech3D.ai commented, "With the development of our fixed-point diffusion models, we are excited to offer a new reliable and innovative way to generate 3D models at scale from 2D reference photos. Our new patent application highlights our commitment to driving innovation in the field of generative AI, and we look forward to continued success and advancement."

Diffusion models prescribe a solution for creating 3D models from 2D reference photos, either as a whole or part-by-part, by evolving differentiable, deformable templates into 3D parts, conditioned on one or more reference photos of the part. As previously announced, over the last several years Nextech3D.ai has been building tens of thousands of high-quality, fully textured, photo-realistic 3D assets comprising hundreds of thousands of individual parts. These parts are harvested into Nextech3D.ai's "part library", synthetically rendered from random views, and used to train new diffusion models that are able to reconstruct 3D mesh parts from reference photos. The Company's first clean dataset, with 70,000+ 3D objects and more than 2.2M synthetically rendered reference photos, is now ready for training. This is still a tiny portion of all the parts and assets in its model library, and yet it is already larger than ShapeNet, the largest publicly available 3D dataset, with its 51K models of varying quality.
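
The paragraph above describes a data pipeline: each 3D part in the library is rendered from random views, and the rendered photos are paired with the ground-truth mesh for training. The sketch below shows, in Python, one way such (mesh, rendered views) pairs could be organized; the directory layout, file names and PartSample type are illustrative assumptions, not Nextech3D.ai's actual pipeline.

```python
# Hypothetical sketch of a "part library" training set: each 3D part mesh is
# paired with several synthetically rendered views. Layout and names are
# assumptions for illustration only.
from dataclasses import dataclass
from pathlib import Path
from typing import List

@dataclass
class PartSample:
    mesh_path: Path          # ground-truth 3D mesh for one part
    view_paths: List[Path]   # 2D reference photos rendered from random views

def build_part_dataset(root: Path) -> List[PartSample]:
    """Pair every part mesh with its rendered reference views."""
    samples = []
    for part_dir in sorted(root.iterdir()):
        if not part_dir.is_dir():
            continue
        mesh = part_dir / "part.obj"
        views = sorted(part_dir.glob("render_*.png"))
        if mesh.exists() and views:
            samples.append(PartSample(mesh_path=mesh, view_paths=views))
    return samples

if __name__ == "__main__":
    root = Path("part_library")   # hypothetical directory
    dataset = build_part_dataset(root) if root.exists() else []
    print(f"{len(dataset)} parts, "
          f"{sum(len(s.view_paths) for s in dataset)} rendered views")
```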

Technical Explanation
Diffusion deep-learning models have been successful in creating realistic images by adding noise to a training example and training a neural network to estimate and remove that noise at each step. The general idea is as follows: starting from a training example, say an image, noise is successively added to it. A neural network, usually a U-Net, learns to estimate and remove the noise from the noisy sample at each step. To create novel images, one starts with a sample from a pure noise distribution, and the noise is successively estimated and removed using the same U-Net until one converges to a (hopefully) realistic image. "Conditioning" data, such as embeddings of textual prompts, is provided as side information during training. At sampling time, conditioning data provided by the user steers the backward diffusion process towards an image that is relevant to the user's input.
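
For readers who prefer code, the following is a minimal sketch of the training step described above, assuming a standard DDPM-style noise schedule and a small MLP standing in for the U-Net. The dimensions, hyperparameters and conditioning embedding are toy values for illustration and do not reflect any production system.

```python
# Minimal DDPM-style training step: add noise at a random timestep and learn
# to estimate it, with a conditioning embedding passed as side information.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    """Stand-in for the U-Net: predicts the noise added to x_t."""
    def __init__(self, data_dim=64, cond_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim + cond_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, data_dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = (t.float() / T).unsqueeze(-1)      # scalar timestep feature
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

def training_step(model, x0, cond, optimizer):
    """One step: noise a clean example, have the network estimate the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    a = alpha_bars[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise    # forward (noising) process
    pred = model(x_t, t, cond)                      # denoiser estimates the noise
    loss = nn.functional.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x0 = torch.randn(32, 64)      # toy "clean" training examples
    cond = torch.randn(32, 16)    # toy conditioning embeddings (e.g., text)
    print(training_step(model, x0, cond, opt))
```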

Each time a diffusion model is sampled to generate an image, by design it will generate an independent image. This allows a virtually infinite number of images to be generated. However, there is no ground truth against which to validate the generated image; the quality of the resulting image, and its relevance to the prompt, is rather subjective.
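
The sketch below illustrates why: a standard reverse (sampling) loop starts from fresh Gaussian noise and injects new noise at every step, so the same conditioning input yields a different output on every run. It reuses the DDPM-style schedule assumed above and a dummy denoiser; it illustrates the general behaviour of diffusion sampling, not the Company's method.

```python
# Standard reverse-diffusion sampling: same conditioning, different result
# each run, because of the random start and the noise injected at each step.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(model, cond, data_dim=64):
    x = torch.randn(cond.shape[0], data_dim)         # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((cond.shape[0],), t)
        eps = model(x, t_batch, cond)                # estimated noise
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)   # fresh noise per step
    return x

if __name__ == "__main__":
    dummy = lambda x, t, cond: torch.zeros_like(x)   # placeholder "denoiser"
    cond = torch.randn(1, 16)
    a, b = sample(dummy, cond), sample(dummy, cond)
    print(torch.allclose(a, b))   # False: same conditioning, different samples
```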

To use diffusion models to turn 2D reference photos into 3D models, one can think of the 2D reference images as conditioning prompts and hope to recover the 3D model the 2D photos correspond to. The issue is, among other things, that the backward diffusion process will end up generating a different 3D model upon each convergence. Nextech3D.ai has filed a breakthrough provisional patent application that addresses this issue by prescribing a new variation of diffusion models it calls fixed-point diffusion, which is capable of reliably generating 3D models from 2D photos where there is only a single ground truth corresponding to the conditioning data (i.e., the 2D reference images).


Pictured above: A diffusion model creating four images from the prompt "A scientist riding an elephant on moon, cartoon style." Each image starts from an independent sample of random Gaussian noise and ends up as an independent image that is still relevant to the conditioning prompt.

With a new wave of generative AI systems, the world is entering a period of generational change in which entire industries have the potential to be transformed. Due to its advances in AI, the Company believes it is perfectly positioned to be the supplier of choice for the global $5.5 trillion ecommerce industry as it pivots from 2D to 3D models, a shift estimated to be worth $100 billion.

To learn more, please follow us on Twitter, YouTube, Instagram, LinkedIn, and Facebook, or visit our website: https://www.Nextechar.com.

For further information, please contact:

Investor Relations Contact
Lindsay Betts
investor.relations@Nextechar.com
866-ARITIZE (274-8493) Ext 7201

Nextech3D.ai
Evan Gappelberg
CEO and Director
866-ARITIZE (274-8493)

About Nextech3D.ai
(formally "Nextech AR Solutions Corp" or the "Company") (OTCQX: NEXCF) (CSE: NTAR) (FSE: EP2 is a diversified augmented reality, AI technology company that leverages proprietary artificial intelligence (AI) to create 3D experiences for the metaverse. Its main businesses are creating 3D WebAR photorealistic models for the Prime Ecommerce Marketplace as well as many other online retailers. The Company develops or acquires what it believes are disruptive technologies and once commercialized, spins them out as stand-alone public Companies issuing a stock dividend to shareholders while retaining a significant ownership stake in the public spin-out.

On October 26, 2022, Nextech3D.ai spun out its spatial computing platform, ARway, as a stand-alone public company. Nextech3D.ai retained a controlling ownership in ARway Corp. with 13 million shares, or a 50% stake, and distributed 4 million shares to Nextech AR shareholders. ARway is currently listed on the Canadian Securities Exchange (CSE: ARWY), in the USA on the OTC (OTC: ARWYF), and internationally on the Frankfurt Stock Exchange (FSE: E65). ARway Corp. is disrupting the augmented reality wayfinding market with a no-code, no-beacon spatial computing platform enabled by visual marker tracking.

On December 14, 2022, Nextech announced its second spin-out, Toggle3D, an AI-powered 3D design studio built to compete with Adobe. Toggle3D is expected to be public in the first half of 2023.

To learn more about ARway, visit https://www.arway.ai/

Forward-looking Statements
The CSE has not reviewed and does not accept responsibility for the adequacy or accuracy of this release.

Certain information contained herein may constitute "forward-looking information" under Canadian securities legislation. Generally, forward-looking information can be identified by the use of forward-looking terminology such as, "will be" or variations of such words and phrases or statements that certain actions, events or results "will" occur. Forward-looking statements regarding the completion of the transaction are subject to known and unknown risks, uncertainties and other factors. There can be no assurance that such statements will prove to be accurate, as future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements and forward-looking information. Nextech will not update any forward-looking statements or forward-looking information that are incorporated by reference herein, except as required by applicable securities laws.

SOURCE: Nextech3D.ai



View source version on accesswire.com:
https://www.accesswire.com/744773/Nextech3Dai-Files-Multiple-Generative-AI-Patents-Covering-Breakthrough-3D-Model-Creation-For-Global-55-Trillion-Dollar-Ecommerce-Industry

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.
 
 
Copyright © 2010-2020 SanAnselmo.com & California Media Partners, LLC. All rights reserved.