In this part, you will write a program that converts an unsigned value from one base to another base. The program should prompt the user for the initial base and then the representation of the value in that base, and then the program should ask for the desired base and print the new representation (in that desired base) of the value that was given earlier.
Assume that both bases entered by the user will be between 1 and 16 (inclusive).
For bases above 10, digits representing the values 10 through 15 will be needed. As with hexadecimal, the uppercase letters A through F will be used for these.
You do not need to perform input validation.
Below are examples of how your program should behave on a Linux command line. I start by showing how to use the makefile I provide on Canvas (including how to toggle the -g flag); you are welcome to use it, but you don't have to.
$ ./base_converter
Enter initial base: 2
Enter base-2 representation: 10110
Enter desired base: 10
Base-10 representation: 22
$ ./base_converter
Enter initial base: 10
Enter base-10 representation: 22
Enter desired base: 2
Base-2 representation: 10110
$ ./base_converter
Enter initial base: 16
Enter base-16 representation: 59
Enter desired base: 5
Base-5 representation: 324
$ ./base_converter
Enter initial base: 13
Enter base-13 representation: BB
Enter desired base: 10
Base-10 representation: 154
$ ./base_converter
Enter initial base: 10
Enter base-10 representation: 12090
Enter desired base: 16
Base-16 representation: 2F3A
$
In this part, you will write a program that takes an integer as a command-line argument and prints the two’s complement and signed magnitude representations of the integer. Assume that an integer is represented with 32 bits (as is the case for int on the CSIF, the reference environment) and that the given integer can be represented (with both representations) with 32 bits.
The program should print an error message to standard error (not standard output) and return 1 if the user provides the wrong number of command-line arguments.
In this part, you will write a program that parses a bit string to determine the corresponding floating-point value, according to a given format. Write a program that takes as its only command-line argument the name of a file whose contents describe the floating-point format. This file will specify the order of the sign, exponent, and mantissa fields. This file will also specify the sizes (in bits) of the exponent and mantissa fields. Below are examples of such files (all of which are provided on Canvas).
Your program should prompt the user to enter the bit string. If the user enters a bit string with the wrong number of bits, then the program should print an error message (to standard output) and keep prompting the user until they enter an appropriate bit string. (This is the only input validation that needs to be done.) Once the user enters an acceptable bit string, the program should interpret the bit string as a floating-point value (according to the format outlined in the given file) and print out that value.
As with the floating-point formats we will discuss, the exponent is stored with a bias. That bias is always 2^(n-1) - 1, where n is the number of bits used for the exponent. For example, if the exponent field holds 00111(2) and n = 5 bits are used for the exponent, then the bias is 2^(5-1) - 1 = 15, so the actual exponent will be 00111(2) - 15(10) = 7(10) - 15(10) = -8(10). Moreover, there will always be an implicit leading 1 with the mantissa.
Unless you mess with the precision used by printf() or std::cout (whichever you prefer) when printing, floating-point error should not be an issue. I will avoid autograder test cases that would cause issues with the default precision used in printing.
Below are examples of how your program should behave on a Linux command line.
$ make D=1
g++ -Wall -Werror -std=c++11 -g decode_float.cpp -o decode_float
$ ./decode_float test_format1.txt
Enter bit string: 101010
Wrong number of bits.
Enter bit string: 10101010
Wrong number of bits.
Enter bit string: 1111000
Value: -3.5
$ ./decode_float test_format1.txt
Enter bit string: 0000100
Value: 0.125
$ ./decode_float test_format2.txt
Enter bit string: 11111110
Value: 31
$