More on boundary testing and Mp3 encodings

My previous post refuting a conjecture by Pradeep Soundarajan suggesting there are no boundary values in software was a bit harshly worded, and to him and the readers I apologize. Occasionally I get a little overzealous. I am sure Pradeep is a great guy, and I must say his reply to me on his blog was rather cordial given the situation. As I told Pradeep, email and blogs are poor media for expressing emotion. MichaelB cautioned me about this before, but my Type A personality sometimes takes over, and it is something I need to work on.

Anyway, although my critique of Pradeep's conjecture was pretty ruthless, the analysis was accurate, and the boundary values Pradeep suggested for testing an Mp3 file are in fact not possible or probable even using the tools he referenced. I am not an expert on Mp3 encoding or decoding technology, but I did know that Mp3 files use standard encoding bit rate formats. A little investigation quickly revealed that the bit rates for audio encoding are based on multiples of 8, and that the first 32 bits of an Mp3 file contain header information, including 2 bits that specify the layer (Layer 1, Layer 2, or Layer 3) and 4 bits that specify the bit rate index, which selects one of the bit rates outlined below. (Thanks to Wikipedia I discovered that some of the specific data I used in my initial rebuttal was incorrect, and I put a single strikethrough through that part of the sentence.)

Layer 1 - 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416, 448
Layer 2 - 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384
Layer 3 - 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320

In his attempt to contest my argument Pradeep said, "Pooh! I am not sure why you don't know that bit rates can be 33, 41, 57 or any number you want to generate. I recommend you to go through some multimedia test content generation tools like ffmpeg which gives a multimedia tester an edge to generate test content of his choice. "

So, not being an expert, I took Pradeep's suggestion "For those who don't know tools like ffmpeg, it is impossible. I suggest you explore the boundary of your education on multimedia." and went home and increased my understanding of Mp3 encodings and the ffmpeg toolset. As I read through some of the API references I found an interesting table of bit rate constants (illustrated below). (Now, I am thinking to myself...this is a clue! I am also thinking...hmmm...there might be some real boundary values here!)

static const int sBitRates[2][3][15] = {
  { {  0, 32, 64, 96,128,160,192,224,256,288,320,352,384,416,448 },
    {  0, 32, 48, 56, 64, 80, 96,112,128,160,192,224,256,320,384 },
    {  0, 32, 40, 48, 56, 64, 80, 96,112,128,160,192,224,256,320 } },
  { {  0, 32, 48, 56, 64, 80, 96,112,128,144,160,176,192,224,256 },
    {  0,  8, 16, 24, 32, 40, 48, 56, 64, 80, 96,112,128,144,160 },
    {  0,  8, 16, 24, 32, 40, 48, 56, 64, 80, 96,112,128,144,160 } }
};

I also followed up by asking a co-worker who frequently works with ffmpeg to try to encode an Mp3 file with a bit rate of 57 kb/s. Interestingly enough, when he issued the command line parameters we got an error message indicating "Invalid Value." (I can get a snapshot of the command window, but I really don't think that is necessary.)

I am still not an expert on Mp3 encodings, but I am fairly certain that Mp3 file decoding algorithms are standardized across the industry. So, let's just assume for a moment that we can encode an Mp3 file at 57 kb/s, and that file fails to play. Does it really matter? No, because industry hardware simply doesn't support that encoding, and as long as the Mp3 player didn't burst into flames there is no business case that would compel someone to try to make it work (at this time). (I am not suggesting that we test only "real-world" scenarios here, but I am suggesting that in-depth domain and system knowledge goes a long way toward increasing the efficiency and effectiveness of our testing, and can lead to better identification of boundary values.)

Now, perhaps I am still missing something, and as I expressed previously I am not an expert on Mp3 encodings, or the use of ffmpeg or other tools to encode Mp3 files. So, I have asked Pradeep to share his knowledge with me in this area and teach me how to encode an Mp3 file with a bit rate of 57 kb/s using a commonly used tool such as ffmpeg (there is no doubt someone can write a customized algorithm to do this), and to also let me know of a commercially available Mp3 player that will decode and play that file. (Because if it can be done I would like to learn how simply because I love to learn new things.)

Many people do assume that boundary testing is quite simple. The actual execution of boundary tests is in fact rather simple; however, discerning the boundary values in any complex software is not as simple as looking at some minimum and maximum values and trying one value below and above each boundary condition. Boundary testing is a systematic procedure to solve a specific type of complex problem (specifically the incorrect usage of data types or constant values, artificially constrained data types, and relational operators). Boundary value analysis doesn't solve all problems, it is not the holy grail, and its efficacy relies on the tester's ability to understand and decompose the data set effectively. The less the tester knows about the data and how the data is used by the program, the less effective they will be in the application of this technique.

I did not intend my previous post to be construed as a personal attack against Pradeep; I am sure he is a bright guy. But, I am challenging his assertion on boundary testing on its technical merit. I hope he replies here (or on his blog) with an example of how to encode an Mp3 file at 57 kb/s, and I will make sure it is posted (or linked) here because I am certainly curious. (I don't really like the taste of humble pie, but I will eat it from time to time if it helps me learn.)

Comments


  • Anonymous
    March 07, 2007
    Hi Shrini, Actually, I don't have any 'beliefs' about boundary testing. Boundary testing is an analytical process based on rational thought. But, no, I have not changed my position on the application or value of using boundary value analysis as a technique to identify specific classes of defects. The more we know, the greater its effectiveness. As I said, I know that boundary testing is not a panacea for everything, but it is a very useful tool, and is very good at doing the job it was designed to do. (I really hate to repeat myself, but sometimes people only read what they want to read, or misinterpret the words I use.) So, I have said all along that there are often multiple boundaries between the physical ranges of primitive data types, and they are sometimes very hard to find without looking at the code or having in-depth domain and system level knowledge. I specifically said "Also, if I artificially constrain an int in a predicate statement using a relational operator such as if (intValue <= 0) then there is another boundary condition that I would certainly want to analyze" and "testers must be aware that boundary values don't always exist only at the extreme ranges of data types or other variables. Occasionally, there are boundary values/conditions within the minimum and maximum physical ranges of a variable." I am trying to pass on some useful information here, so if any of this is unclear please let me know. But, perhaps I am missing the intent of your question, or the point you are trying to make (and I sure hope that point is not suggesting the upper boundary for a 32-bit unsigned int can be greater than 4294967295). If so, can you please rephrase your question concisely?


  • Anonymous
    March 09, 2007
    It is mathematically impossible for an unsigned 32-bit integer to hold a value greater than 4294967295 (2^32 - 1, because we start counting at zero). Therefore, if the developer declares a variable with an unsigned 32-bit integer type (which is an integral data type) and the user enters a value > 4294967295, then an overflow exception will be thrown (and hopefully the developer has error code to deal with that situation). Now, if you can enter a value of 5,000,000,000 and the application does not throw an exception, then the developer has not declared the variable as a 32-bit unsigned integer, but (most likely) as a signed or unsigned 64-bit integer data type. If the developer chooses to artificially constrain that 64-bit integer to 5000000000 based on some requirements, then it is most likely that boundary will remain fixed, because I don't know of too many developers who arbitrarily change an established boundary value in the code very frequently. In fact, in well-formed code the value used to constrain the size of an integral data type is typically declared as a constant value and not hard-coded into a conditional statement. Curiosity is good, and curiosity often helps us discover new and interesting things. However, between 4294967295 (the max value for an unsigned 32-bit integer) and 9,223,372,036,854,775,807 (the max value for a signed 64-bit integer) there are 9,223,372,032,559,808,512 integer values. Personally, I don't want to rely on hunches or best guesses to see whether the developer has artificially constrained a variable declared as a 64-bit integer. I will find the specific value from the requirements, code, developer, etc. and take it for a boundary test ride. Once I do that, if I want to poke around at random values to see if anything interesting happens, then that, my friend, is called equivalence class testing. Karen Johnson gives some pretty good examples of specific boundary values in an earlier post.
She states that she doesn't think about the numbers technically, but the fact is there is a technical basis for why some of the numbers she refers to are extremely valuable to know and why they are more interesting than other numbers. For example, she discusses the value 2038 in the context of date input. The link in her post does an excellent job of explaining the details of the problem. However, on Jan 19, 2038 at precisely 03:14:07 the second count will reach the max value of a signed 32-bit integer and will wrap around to -2147483648. Not only is this particular issue interesting in date fields (and in fact has mostly been corrected), but any place in the program which uses time as an operand to perform internal calculations should be suspect.