Started by jj2007, May 15, 2021, 07:59:58 AM
Quote from: jj2007 on May 15, 2021, 07:59:58 AM
Just found this video explaining what Intel scientists plan for GTA V... impressive
Quote
But contrary to Antonio Nesic it is not, because an assembly-language base for AI would speed the whole thing up by a factor of 5 to 1000 (~30x from optimizing C->ASM, ~30x from HLL->C, both lower estimates; ~200x for cache optimization and tight ASM loops that stay inside the cache, versus distributed, scattered OOP code; ~10,000x if databases are involved, so an estimate of 1000x is very conservative). But people are ignorant, and especially when it comes to AI you have all those HLL freaks who actually have no idea how slow their toy and prototype languages make their projects. And AI code almost never sees any optimization in its life.
Around '96 I worked on a C++-based semantic network software ("Peirce") at the Uni-HH, and the code I had to look at was really bad and unbearable. Well, it worked, but it was some of the worst code I've ever seen.
Quote from: LiaoMi on May 15, 2021, 09:26:42 PM
Has an AI ever been written completely in assembly language?
Quote from: HSE on May 16, 2021, 01:45:29 AM
Quote from: LiaoMi on May 15, 2021, 09:26:42 PM
Has an AI ever been written completely in assembly language?
http://masm32.com/board/index.php?topic=8583.0
Quote from: LiaoMi on May 16, 2021, 03:13:30 AM
Quote
Has an AI ever been written completely in assembly language?
What I probably meant is a fully scalable system: one where you can dynamically perform a variety of calculations, with the ability to connect to other projects.
Quote from: daydreamer on May 16, 2021, 08:42:07 PM
LiaoMi, producing things like a few 500-poly meshes and some textures is hardly worth putting through all the techniques for reducing them to only the visible polys on modern hardware; you are probably faster letting the GPU brute-force it. Things have moved from checking CPU caps to checking GPU caps: whether it supports instancing and other features, is fast enough, and has enough VRAM to handle the game. I don't have the productivity of a 100-member development team to make a very big game and fill many GB of data, so I can never make anything that uses the full potential of a 2021 gaming PC. But I enjoyed assembly size reduction from the start, and now it is also fun to first make low-poly 3D models and reduce polys; a high-end 2021 gaming PC helps make developing 3D faster and easier.
On AI in the style of neural networks: I wonder, if you added a "drunk macro" that self-adjusts, could you emulate drunk behaviour? For example, a human 3D model or robot walking around under AI control is adjusted by the drunk macro to overcompensate. Would you see "a drunk walking around"?