Browse Wiki & Semantic Web

http://dbpedia.org/resource/Bfloat16_floating-point_format
http://dbpedia.org/resource/Bfloat16_floating-point_format
http://dbpedia.org/ontology/abstract bfloat16 (brain floating point with 16 bits) is the name of a floating-point format in computer systems. It is a binary data format with one bit for the sign, 8 bits for the exponent and 7 bits for the mantissa. It is thus a version of the IEEE 754 single data type truncated in the mantissa. bfloat16 is used in particular in systems for machine learning, such as TPUs, as well as certain Intel Xeon processors and Intel FPGAs. , In computer science, bfloat16 (brain floating point) is the name of a particular way of representing numbers in a computer using a floating radix point. It is a format based on the binary system, in which 1 bit expresses the sign, the next 8 bits express the exponent and the last 7 bits express the mantissa. It is essentially a variant of the 32-bit single data type defined by the IEEE 754 standard. It was introduced mainly to support machine learning. , The bfloat16 (Brain Floating Point) floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms. The bfloat16 format was developed by Google Brain, an artificial intelligence research group at Google. The bfloat16 format is utilized in Intel AI processors, such as Nervana NNP-L1000, Xeon processors (AVX-512 BF16 extensions), and Intel FPGAs, Google Cloud TPUs, and TensorFlow. ARMv8.6-A, AMD ROCm, and CUDA also support the bfloat16 format. On these platforms, bfloat16 may also be used in mixed-precision arithmetic, where bfloat16 numbers may be operated on and expanded to wider data types.
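The abstract above describes bfloat16 as the upper 16 bits of an IEEE 754 binary32 value (1 sign bit, 8 exponent bits, 7 mantissa bits). The Python sketch below illustrates that relationship by converting between binary32 and bfloat16 bit patterns; the function names are illustrative only and not part of any library named on this page, and the round-to-nearest-even step is an assumption about how the discarded mantissa bits are commonly handled (plain truncation is the simpler alternative).

import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # Reduce an IEEE 754 binary32 value to its upper 16 bits (bfloat16).
    # Round-to-nearest-even is applied to the 16 dropped mantissa bits;
    # NaN inputs are not special-cased in this sketch.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))   # raw binary32 bits
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)           # ties to even
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bfloat16_bits_to_float32(b: int) -> float:
    # Widen a bfloat16 bit pattern back to binary32 by appending 16 zero bits.
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

# Example: 1.0 survives exactly; 0.1 loses its low mantissa bits.
print(hex(float32_to_bfloat16_bits(1.0)))                        # 0x3f80
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(0.1)))   # ~0.10009765625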
http://dbpedia.org/ontology/wikiPageID 57499027
http://dbpedia.org/ontology/wikiPageLength 29531
http://dbpedia.org/ontology/wikiPageRevisionID 1124029922
http://dbpedia.org/ontology/wikiPageWikiLink http://dbpedia.org/resource/Binary_number + , http://dbpedia.org/resource/ISO/IEC_10967 + , http://dbpedia.org/resource/Half-precision_floating-point_format + , http://dbpedia.org/resource/AI_accelerator + , http://dbpedia.org/resource/Single-precision_floating-point_format + , http://dbpedia.org/resource/Minifloat + , http://dbpedia.org/resource/Tensor_processing_unit + , http://dbpedia.org/resource/FPGA + , http://dbpedia.org/resource/Category:Binary_arithmetic + , http://dbpedia.org/resource/Precision_%28arithmetic%29 + , http://dbpedia.org/resource/TensorFlow + , http://dbpedia.org/resource/Type_conversion + , http://dbpedia.org/resource/AMD + , http://dbpedia.org/resource/16-bit + , http://dbpedia.org/resource/Google_Brain + , http://dbpedia.org/resource/Primitive_data_type + , http://dbpedia.org/resource/Xeon + , http://dbpedia.org/resource/Hexadecimal + , http://dbpedia.org/resource/Significand + , http://dbpedia.org/resource/Exponent + , http://dbpedia.org/resource/0_%28number%29 + , http://dbpedia.org/resource/Nervana_Systems + , http://dbpedia.org/resource/Intelligent_sensor + , http://dbpedia.org/resource/%E2%88%920 + , http://dbpedia.org/resource/Exponent_bias + , http://dbpedia.org/resource/OpenCL + , http://dbpedia.org/resource/Hardware_acceleration + , http://dbpedia.org/resource/AVX-512 + , http://dbpedia.org/resource/Subnormal_number + , http://dbpedia.org/resource/Floating_point + , http://dbpedia.org/resource/ARM_architecture + , http://dbpedia.org/resource/Computer_memory + , http://dbpedia.org/resource/Infinity + , http://dbpedia.org/resource/Dynamic_range + , http://dbpedia.org/resource/NaN + , http://dbpedia.org/resource/Machine_learning + , http://dbpedia.org/resource/IEEE_754 + , http://dbpedia.org/resource/CUDA + , http://dbpedia.org/resource/Computer_number_format + , http://dbpedia.org/resource/Single_precision + , http://dbpedia.org/resource/Offset_binary + , http://dbpedia.org/resource/Sign_bit + , http://dbpedia.org/resource/Mixed-precision_arithmetic + , http://dbpedia.org/resource/Category:Floating_point_types +
http://dbpedia.org/property/wikiPageUsesTemplate http://dbpedia.org/resource/Template:Floating-point + , http://dbpedia.org/resource/Template:Data_types + , http://dbpedia.org/resource/Template:Lowercase_title + , http://dbpedia.org/resource/Template:Confuse + , http://dbpedia.org/resource/Template:Reflist + , http://dbpedia.org/resource/Template:Short_description + , http://dbpedia.org/resource/Template:Legend +
http://purl.org/dc/terms/subject http://dbpedia.org/resource/Category:Binary_arithmetic + , http://dbpedia.org/resource/Category:Floating_point_types +
http://www.w3.org/ns/prov#wasDerivedFrom http://en.wikipedia.org/wiki/Bfloat16_floating-point_format?oldid=1124029922&ns=0 +
http://xmlns.com/foaf/0.1/isPrimaryTopicOf http://en.wikipedia.org/wiki/Bfloat16_floating-point_format +
owl:sameAs http://www.wikidata.org/entity/Q54083815 + , http://dbpedia.org/resource/Bfloat16_floating-point_format + , http://de.dbpedia.org/resource/Bfloat16 + , https://global.dbpedia.org/id/6VPZU + , http://cs.dbpedia.org/resource/Bfloat16 +
rdfs:comment bfloat16 (brain floating point with 16 bits) is the name of a floating-point format in computer systems. It is a binary data format with one bit for the sign, 8 bits for the exponent and 7 bits for the mantissa. It is thus a version of the IEEE 754 single data type truncated in the mantissa. bfloat16 is used in particular in systems for machine learning, such as TPUs, as well as certain Intel Xeon processors and Intel FPGAs. , In computer science, bfloat16 (brain floating point) is the name of a particular way of representing numbers in a computer using a floating radix point. It is a format based on the binary system, in which 1 bit expresses the sign, the next 8 bits express the exponent and the last 7 bits express the mantissa. It is essentially a variant of the 32-bit single data type defined by the IEEE 754 standard. It was introduced mainly to support machine learning. , The bfloat16 (Brain Floating Point) floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms.
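To make the "8 exponent bits, 7 mantissa bits" layout in the comment above concrete, here is a minimal decoding sketch that turns a raw bfloat16 bit pattern into its value using the binary32 exponent bias of 127. The function name is hypothetical, and the subnormal/infinity/NaN handling is assumed to follow IEEE 754 binary32 conventions, as the abstract implies.

def decode_bfloat16(bits: int) -> float:
    # Layout: 1 sign bit, 8 exponent bits (bias 127), 7 mantissa bits.
    sign = -1.0 if (bits >> 15) & 1 else 1.0
    exponent = (bits >> 7) & 0xFF
    mantissa = bits & 0x7F
    if exponent == 0xFF:                         # infinity or NaN
        return sign * float("inf") if mantissa == 0 else float("nan")
    if exponent == 0:                            # zero or subnormal
        return sign * (mantissa / 128.0) * 2.0 ** -126
    return sign * (1.0 + mantissa / 128.0) * 2.0 ** (exponent - 127)

# 0x3F80 -> 1.0; 0x4049 -> 3.140625 (pi rounded to 7 mantissa bits)
print(decode_bfloat16(0x3F80), decode_bfloat16(0x4049))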
rdfs:label Bfloat16 , Bfloat16 floating-point format
properties that link here
http://dbpedia.org/resource/Bfloat16 + , http://dbpedia.org/resource/BF16 + , http://dbpedia.org/resource/Bf16 + http://dbpedia.org/ontology/wikiPageRedirects
http://dbpedia.org/resource/AVX-512 + , http://dbpedia.org/resource/Floating-point_arithmetic + , http://dbpedia.org/resource/DL_Boost + , http://dbpedia.org/resource/Power10 + , http://dbpedia.org/resource/Ampere_%28microarchitecture%29 + , http://dbpedia.org/resource/List_of_Intel_CPU_microarchitectures + , http://dbpedia.org/resource/Advanced_Vector_Extensions + , http://dbpedia.org/resource/Advanced_Matrix_Extensions + , http://dbpedia.org/resource/Mixed-precision_arithmetic + , http://dbpedia.org/resource/CPUID + , http://dbpedia.org/resource/Minifloat + , http://dbpedia.org/resource/Half-precision_floating-point_format + , http://dbpedia.org/resource/IEEE_754 + , http://dbpedia.org/resource/AArch64 + , http://dbpedia.org/resource/AI_accelerator + , http://dbpedia.org/resource/Bfloat16 + , http://dbpedia.org/resource/BF16 + , http://dbpedia.org/resource/Bf16 + , http://dbpedia.org/resource/Brain_floating-point_format + http://dbpedia.org/ontology/wikiPageWikiLink
http://en.wikipedia.org/wiki/Bfloat16_floating-point_format + http://xmlns.com/foaf/0.1/primaryTopic
http://dbpedia.org/resource/Bfloat16_floating-point_format + owl:sameAs