c++ - How to safely offset bits without undefined behaviour?


I'm writing a function to convert a bitset to an int/uint value, considering that the bitset may have fewer bits than the target type.

Here is the function I wrote:

#include <bitset>  // std::bitset
#include <climits> // CHAR_BIT

template <typename T, size_t Count>
static T convertBitsetToNumber( const std::bitset<Count>& bitset )
{
    T result;
    #define targetSize (sizeof( T ) * CHAR_BIT)
    if ( targetSize > Count )
    {
        // If the bitset is 0xf00, converting it to 0x0f00 would lose the sign information
        // (0xf00 is negative, while 0x0f00 is positive), because the sign bit is on the left.
        // So we need to add 0s (4 bits) on the right to get 0xf000, and later divide by
        // 16 (2^4) to preserve both sign and value.
        size_t missingBits = targetSize - Count;

        std::bitset<targetSize> extended;
        extended.reset(); // set all bits to 0
        for ( size_t i = 0; i != Count; ++i )
        {
            if ( i < Count )
                extended[i + missingBits] = bitset[i];
        }

        result = static_cast<T>( extended.to_ullong() );

        result = result >> missingBits;

        return result;
    }
    else
    {
        return static_cast<T>( bitset.to_ullong() );
    }
}

and "test program":

uint16_t val1 = base::BitsetUtl::convertBitsetToNumber<uint16_t,12>( std::bitset<12>( "100010011010" ) ); // val1 is 0x089a
int16_t val2 = base::BitsetUtl::convertBitsetToNumber<int16_t,12>( std::bitset<12>( "100010011010" ) ); // val2 is 0xf89a

Note: see the comment exchange with Ped7g; the code above is right and preserves the sign bit, and the 12->16 bits conversion is right for both signed and unsigned bits. If you are looking for how to offset 0xabc0 to 0x0abc on a signed object, the answer below will help you, so I won't delete the question.
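To make that note concrete, here is a quick check (our own sketch, not from the original question) that sign-extending the 12-bit pattern preserves the numeric value, assuming two's complement representation:

#include <cstdint>
#include <cstdio>

int main()
{
    // 0b100010011010 read as a signed 12-bit value is 2202 - 4096 = -1894.
    // Sign-extended to 16 bits it becomes 0xf89a, which as int16_t is also -1894.
    int16_t val2 = static_cast<int16_t>( 0xf89a );
    std::printf( "%d\n", val2 ); // prints -1894
}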

See, the program works when using uint16_t as the target type:

uint16_t val = 0x89a0; // 1000100110100000
val = val >> 4;        // 0000100010011010

However, it fails when using int16_t, because 0x89a0 >> 4 is 0xf89a instead of the expected 0x089a.

int16_t val = 0x89a0; // 1000100110100000
val = val >> 4;       // 1111100010011010

I don't understand why the >> operator sometimes inserts 0s and sometimes 1s, and I can't find out how to safely do the final operation of my function (result = result >> missingBits; must be wrong at some point...).

It's because shifting is an arithmetic operation, and that promotes the operands to int, which does sign extension.

I.e. promoting the signed 16-bit integer (int16_t) 0x89a0 to a 32-bit signed integer (int) causes the value to become 0xffff89a0, and that is the value that gets shifted.
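To see the promotion at work, here is a small sketch (our own illustration, assuming 32-bit int; right-shifting a negative value is implementation-defined before C++20, but is an arithmetic shift on common platforms):

#include <cstdint>
#include <cstdio>

int main()
{
    int16_t val = static_cast<int16_t>( 0x89a0 ); // bit pattern 1000100110100000, value -30304
    int promoted = val;                           // integer promotion sign-extends to 0xffff89a0
    std::printf( "%08x\n", static_cast<unsigned>( promoted ) );      // prints ffff89a0
    std::printf( "%08x\n", static_cast<unsigned>( promoted >> 4 ) ); // fffff89a on common platforms
}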

See e.g. this arithmetic operator conversions reference for more information.

You should cast the variable (or value) to an unsigned integer (i.e. uint16_t in your case):

val = static_cast<uint16_t>(val) >> 4; 

If the type is not known, for example if it's a template argument, you can use std::make_unsigned:

val = static_cast<typename std::make_unsigned<t>::type>(val) >> 4; 
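Putting it together, here is a minimal self-contained sketch of a zero-filling right shift for a possibly signed type; the helper name logicalShiftRight is ours, not from the question or any standard API:

#include <cstdint>
#include <cstdio>
#include <type_traits>

// Shift on the unsigned counterpart so no sign extension occurs,
// then cast the bit pattern back to T.
template <typename T>
T logicalShiftRight( T val, unsigned n )
{
    using U = typename std::make_unsigned<T>::type;
    return static_cast<T>( static_cast<U>( val ) >> n );
}

int main()
{
    int16_t val = static_cast<int16_t>( 0x89a0 );
    uint16_t shifted = static_cast<uint16_t>( logicalShiftRight( val, 4 ) );
    std::printf( "%04x\n", shifted ); // prints 089a
}

For n > 0 the shifted-in high bits are zero, so the cast back to T at the end is value-preserving.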
