2:14PM EDT - Off to see more of the show
2:13PM EDT - Recapping: SDKs, IRAY VR, Tesla P100, DGX-1, and autonomous cars
2:12PM EDT - Part of the 2016/2017 Formula E season
2:12PM EDT - Will be participating in the Roborace. All cars are PX2-powered
2:12PM EDT - Autonomous race car. 2200lbs
2:10PM EDT - Demonstrating DAVENET AI driving software in action
2:09PM EDT - It took BB8 some time to get halfway-decent at driving
2:08PM EDT - So we're going to see BB-8 learn to drive
2:08PM EDT - "We've been working on a project that is really fun." The name of the car is BB-8
2:05PM EDT - PX2 in the car, DGX-1 in the cloud
2:05PM EDT - (Jen-Hsun is prepared for zoom photos this time)
2:04PM EDT - Drive PX2 uses two unannounced Pascal GPUs
2:04PM EDT - Drive PX2 in Jen-Hsun's hands
2:03PM EDT - Baidu is working on an NVIDIA-powered self-driving car computer as well
2:02PM EDT - Demoing DriveNet running at 180fps on the smallest Drive PX
1:56PM EDT - Recap: Tesla M40 for hyperscale, K80 for multi-app HPC, P100 for scaling up, and DGX-1 for the early adopters
1:54PM EDT - First DGX-1s will be going to research universities
1:54PM EDT - NVIDIA is taking orders starting today
1:50PM EDT - NVIDIA has adapted TensorFlow for the DGX-1
1:47PM EDT - Now on stage: Rajat Monga of Google's TensorFlow team
1:47PM EDT - More AI/neural network examples coming up
1:41PM EDT - Baidu is using recurrent neural networks rather than convolutional
1:40PM EDT - Now on stage: Bryan Catanzaro of Baidu
1:38PM EDT - "We achieved a 12x speed-up year-over-year" in deep learning
1:37PM EDT - Discussing the challenges in scaling out the number of nodes in many algorithms
1:35PM EDT - Two Xeons, and 7TB of SSD capacity
1:35PM EDT - Quad InfiniBand, dual 10GbE
1:34PM EDT - 170TF FP16 in a box. 8 P100s in a hybrid cube mesh
1:34PM EDT - A full deep learning rackmount server
1:33PM EDT - But if it's 600mm2 for just the die, that's a huge jump in the size of dies being produced on 16nm/14nm TSMC/Samsung FinFET
1:33PM EDT - Need to get confirmation on whether 600mm2 is just the GPU die, or if they're counting other parts as well
1:32PM EDT - P100 servers coming in Q1'17
1:31PM EDT - P100 in volume production today
1:30PM EDT - NV wanted new algorithms to take advantage of the hardware
1:29PM EDT - Recapping NVLink. 5x the aggregate speed of PCIe 3.0
1:29PM EDT - "TSMC CoWoS® (Chip-On-Wafer-On-Substrate) services use Through Silicon Via (TSV) technology to integrate multiple chips into a single device. This architecture provides higher density interconnects, decreases global interconnect length, and lightens associated RC loading resulting in enhanced performance and reduced power consumption on a smaller form factor."
1:28PM EDT - Chip on Wafer on Substrate, the largest such chip ever made
1:27PM EDT - Jen-Hsun is "very frickin excited" about it
1:27PM EDT - Pascal, 16nm FinFET, Chip-On-Wafer-On-Substrate, NVLink, and New AI Algorithms
1:26PM EDT - The Tesla P100 is "5 miracles"
1:26PM EDT - (150B Transistors is undoubtedly counting the RAM, BTW)
1:26PM EDT - This is using the previously announced mezzanine connector with on-package memory
1:25PM EDT - (14MB is huge for a register file, BTW. That's a lot of very fast memory)
1:24PM EDT - 5.3TF FP64, 10.6TF FP32, 21.2TF FP16, 14MB SM Register File, 4MB L2 Cache
1:23PM EDT - "The most ambitious project we have ever undertaken"
1:22PM EDT - AI needs more computing power than what is currently available
1:22PM EDT - "We simply don't have enough computing horsepower"
1:18PM EDT - Teaching AI to draw landscapes inspired by those images
1:18PM EDT - Training it with Romantic-era images
1:17PM EDT - Teaching a neural network to paint
1:17PM EDT - Demo time: Facebook AI Research
1:16PM EDT - Jen-Hsun wants to move from supervised, labor-intensive learning to unsupervised learning
1:14PM EDT - GIE: 20 images/s/W on the Tesla M4
1:14PM EDT - "There's no reason to use FPGAs"
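(Editor's aside: the P100 and DGX-1 throughput figures quoted above are internally consistent, and the "5x PCIe 3.0" NVLink claim checks out too. A quick sketch of the arithmetic; the per-link NVLink and PCIe bandwidth figures are my own recollection of the Pascal specs, not numbers given on stage:)

```python
# Sanity check on the quoted Tesla P100 / DGX-1 numbers.
# Pascal halves precision for double the rate: FP32 = 2x FP64, FP16 = 2x FP32.

fp64 = 5.3            # TFLOPS, as quoted on stage
fp32 = fp64 * 2       # 10.6 TFLOPS, matches the quoted figure
fp16 = fp32 * 2       # 21.2 TFLOPS, matches the quoted figure

# DGX-1: 8x P100 in a hybrid cube mesh, quoted at "170TF FP16 in a box"
dgx1_fp16 = 8 * fp16  # 169.6 TFLOPS, rounds to the quoted 170 TF

# NVLink recap: assuming 4 links per P100 at 40 GB/s bidirectional each,
# vs ~32 GB/s bidirectional for a PCIe 3.0 x16 slot (editor's assumption)
nvlink_bw = 4 * 40    # 160 GB/s aggregate
pcie3_x16_bw = 32
ratio = nvlink_bw / pcie3_x16_bw  # 5.0, the "5x the aggregate speed" claim

print(fp32, fp16, dgx1_fp16, ratio)
```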