

Safe C Programming

C code is only safe when it is developed following a number of security recommendations

The intent of this book is to teach about some mistakes that are common in C , and that cannot be caught directly by the compiler

( fortunately the new (free?) MSVC compiler now has options to catch many of these problems at compile time or at runtime )

As Kernighan and Ritchie say : "C retains the basic philosophy that programmers know what they are doing"

C is really a low level language , it only appears to be high level , but indeed it is not

The proof of this is that you can write any kind of mistake you want , like this:

char temp[100];
temp[200] = 1;   /* out of bounds , but it compiles */

The language doesn't check the input , it just compiles , it is up to you to write code in a consistent manner

This is exactly what has produced the large amount of software on the market today with embedded bugs that only time will reveal

Learning about the problems that can arise in C and how to avoid them is one of the most important things that developers need to learn

C is something that you start learning today but you never know when you will finish learning it , because you learn new things every day

This book expects a little knowledge of C , it is not a tutorial , if you are new to C , first learn the basics

Here we will only describe the problems and how to avoid them


Why C ?

C is the most powerful language in its class , there is only one other language that can beat C in speed and features , assembly , but assembly has low productivity , is completely non portable and is subject to bugs that are not easy to find and fix , which means that code written in assembly will only run on a single platform , and a good optimizing compiler like mingw or msvc will generate assembly code as good as hand written assembly

C was developed around 1972 by Dennis Ritchie at Bell Laboratories , and since then it has been the language of choice for writing operating systems , compilers and other complex software

The entire Unix operating system was created using C

C is fast , when very well used of course , its power comes with a price , and the price can be high for developers who don't learn it well before producing production code

If you have used Basic , Pascal , or another truly high level language , you will not immediately understand all the power of C

The power of C comes from the possibility of generating code that executes nothing that is not in the code , so , if you are not checking array bounds in your code , they will not be checked at all

It will not add code that you did not write , while other languages always do checks without your knowledge

In VB , while you are typing the code , the environment is checking it at the same time , all array accesses are checked for boundary errors , and execution stops as soon as some boundary error occurs

Because of these checks the code generated by such languages is slower

If you want some kind of code check in C , you need to write it yourself

It is true that you can remove all these checks as soon as your application is mature enough and very well tested , but we are not perfect , we are subject to errors , and due to this misconception there is so much C code today with weird bugs that are very difficult to track down and fix

How many technology news stories have you seen about buffer overrun exploits ?

Many come from very large corporations that have been working with C based code for years , and it is difficult to believe that they have not added array boundary verification to their code after so many years of working with the C language

C also doesn't check conversions for loss of data , like converting an int to a short , or a short to a char , and it doesn't check for integer overflow

It also has some spartan default rules when passing information to functions without prototypes , in this case the arguments undergo the default promotions and the return value is assumed to be int , so , if a function has a float or double argument or return value , it will quietly be passed with the wrong representation , generating an error in the called function that will be hard to debug and fix

People may also argue that we have more powerful and elegant languages like Eiffel , Ada , C# , C++ , VB.Net , and it is a good argument I agree , but for how many years will these languages be supported ?

Does code written in these languages live forever ?

Did you know that the VB language created by Microsoft is no longer supported ?

They try to push people using VB6 to move to VB.Net , but there are tons of lines of VB5 and VB6 code that will be totally impossible to translate to VB.Net , so , people will just lose the large amount of code written in VB5 or VB6 just because the company that created the language decided to stop supporting it

This will never occur with C , you will always find a usable compliant C compiler to reuse your source code now , tomorrow and in a hundred years

C is not controlled by a single company , and it may be modified in the future to support new standards

C compilers are always being updated , they are mature , the language is clean and simple , you only need to learn it , and what you learn today you will use for the rest of your life , without the risk of losing your investment in the language

Notice that we are using C code written back in 1972 today without noticing it , very well designed C code will live forever , think about that when deciding on your programming language of choice

Other languages like C# and Java are descendants of C , and any C programmer is comfortable using these languages since the syntax is almost the same

With these ideas in mind we will try to discover better ways to avoid the problems of C in order to generate fast code in a safe manner


C compilers

This paper describes problems that can be compiled and reproduced with mingw and msvc

These are by far the best compilers available today

mingw is free , it is the win32 version of the very well known GCC compiler , it is very good and can be downloaded from here:

The msvc compiler is also free but incomplete for creating win32 applications with access to the Windows APIs , it can be downloaded here:

The new msvc compiler has some new compiler options that can solve , or at least diminish , the problems described in this paper

Anyway , we will not describe how to use them here , since they are not part of the C language , they are an extension added to the msvc C compiler , and our goal is to provide solutions that can be applied with all the C compilers available on all platforms

And the free msvc C/C++ Toolkit compiler is incomplete for compiling win32 applications , since it doesn't have the include files and libraries to access the Windows APIs , if that is a problem for you then use the mingw compiler or purchase the MSVC compiler in the Visual Studio package

To enable all warnings when using mingw use the option -Wall

To enable all warnings when using msvc use the option /W3 or /W4

Enabling warnings during compilation is a very good programming practice , and can prevent lots of possible problems in your code


The compiler problem

The compiler cannot catch these bugs at compile time or runtime , they can only be seen when it is too late ( when your application fails miserably )

For this reason you need to add , at least during the debugging phase of your application , some kind of checks to avoid future bugs


Compiler assumptions

The compiler makes some assumptions that can make problems difficult to find

The prototype problem

Always prototype your functions , without prototypes the compiler may make wrong assumptions about the arguments and return value , and worse , this cannot be directly detected at compile time or runtime


Variable initialization

Auto variables , variables that are declared inside a function without any other storage specifier , are not initialized and may contain garbage

This problem is a source of a large number of bugs

To avoid this problem , always initialize variables to a value

Take a look:

void test()
{
  int myint;
}

In this case the myint variable can contain any value , because it was not initialized

Now take a look:

void test()
{
  int myint = 0;
}

In this case the myint variable will be allocated and assigned the value zero

Both mingw and msvc can catch the use of auto variables without an initialization

Remember that static variables ( variables defined with the static modifier ) are initialized once , after that they keep the last value assigned to them , even between multiple calls to the test function

Like this:

void test()
{
  static int myint = 0;
  myint++;
}

On the first call to test the myint variable starts at zero ( really the variable is initialized before the start of the application ) , then it is incremented by one , and on each subsequent call to test it is incremented by one again , keeping its value until the end of the execution of the application


Sequence of initialization of variables

Global variables ( defined outside of any function ) have their value initialized to 0 when the application starts

Static variables have the same behavior as global variables , their value is initialized to 0 , and they keep their value until the end of the execution of the application

Auto variables don't have any defined value at initialization , so , you need to assign a value to them before reading them ( use them as an l-value in an assignment first )


The = and == confusion

One major problem in C code is the possibility of using = when == is intended

I think that every C developer has already been bitten by this bug

Take a look:

int test(int value)
{
  if (value = 0)   /* bug : this assigns 0 to value , the test is always false */
    return 0;
  return 1;
}

Maybe that is not what you were expecting , maybe you wanted to test the value of the variable against 0 , and what you meant to write was:

int test(int value)
{
  if (value == 0)
    return 0;
  return 1;
}

But the compiler will not tell you that you are assigning 0 to the value variable instead of testing the value of the variable against 0

The compiler will just compile , and in the future you will not understand where the bug that is generating some unexpected result is

To avoid this problem you have two solutions

First and simpler , use the constant before the variable in the test , like this:

int test(int value)
{
  if (0 == value)   /* writing 0 = value by mistake will not compile */
    return 0;
  return 1;
}

In this case the compiler will complain about the error of trying to write to a constant value that is not an l-value , but it cannot help when both values in the test are modifiable variables

Second and more elaborate , use a replacement for == with a #define like this:

#define _equal_    ==

int test(int value)
{
  if (value _equal_ 0)
    return 0;
  return 1;
}

In this case the _equal_ definition will expand to == during compilation and will avoid the = with == mistake in tests , and it works with any kind of variable , providing a good solution to avoid this nasty bug

You may think that you will never be affected by this problem , but believe me , you will , and worse , your users may be affected by this problem when using your code


The & and && confusion

The & is a bitwise operator while && is a logical ( conditional ) operator

The expression ( 1 & 2 ) will return 0 , while ( 1 && 2 ) will return 1

Beginners may be bitten by this problem


Precedence of operators

Maybe one of the most difficult things to master in C is the precedence of operators

It may require a lot of time to be completely understood

If you are not comfortable with precedence just put parentheses everywhere

It will not hurt your code and will help readers quickly understand what you mean


Pointer arithmetic

When a pointer is incremented or decremented , the address moves by the size of the pointed-to type , not by one byte

Take a look:

void test()
{
  int a;
  int *ptr = &a;
  ptr++;   /* the address in ptr grows by sizeof(int) */
}

In this case the address in ptr is incremented by 4 ( on a platform with 4 byte ints ) , because ptr points to int , the same occurs with structure pointers , with an increment the address in the pointer will grow according to the size of the structure

If you are not yet comfortable with pointers in C , just search for more information about pointers , because they are a large source of bugs in C code


Absence of pointer validation in the C libraries

Did you know that the C library functions don't check pointers for validity ?

Try this code:

#include <stdio.h>
#include <string.h>

int main ()
{
  char *temp = NULL;
  int ret;
  ret = strlen (temp);   /* undefined behavior : NULL is not a valid string */
  printf ("string size %d \n", ret);
  return 0;
}

After executing this code your application will just crash , this occurs because the strlen C function doesn't check the validity of the pointer and just executes what you asked

Remember : "C always thinks that you know what you are doing"

And since C doesn't check it , you need to check it yourself , and this is exactly where C programmers fail , they don't check things that need to be verified to avoid bugs

That includes : whether the destination array is large enough to receive a copy of the source array , the validity of memory allocated with malloc , the validity of pointers that could already have been deallocated , whether the size of a destination value matches the size of the source operand , overflows , range limits of all sorts , memory deallocation when a function finishes execution , stack overflows due to badly designed recursive functions and many more quirks

Only by learning will you be able to avoid these mistakes


Use random data to test your functions

This is one of the latest tricks that I have learned

Feed random data into your functions to see whether they can recover from wrong user input

If you have just finished the development of your functions , and you want to see how robust they are , just feed them random data , and generate a log file to see what your functions return from this wrong input

You will be amazed at the number of possible flaws that may arise

Make the input go beyond the valid limits of your functions to see whether your functions are mature enough to understand that the input is invalid and recover from it


Array allocation on the stack

When you allocate an array in a function like this:

void test()
{
  char temp[255];
}

The temp array is allocated on the stack

The stack is a portion of memory that is shared by all the functions being called

It has a predefined size and usage cannot go beyond it

Now when you define the array with the static modifier like this:

void test()
{
  static char temp[255];
}

The temp array is allocated in static storage ( the program's data segment , not the stack ) , and it can be very large , limited by the available memory rather than by the stack size

Allocating large arrays on the stack is not a good idea and may generate unexpected errors with some windows compilers including msvc 6.0 and above

If you need large arrays declare them static or consider using malloc and free

int test()
{
  static char large1[1000000];
  /* correct , it will be placed in static storage , not on the stack */

  char large2[1000000];
  /* wrong , it will be placed on the stack and may overflow it , leading to unexpected bugs */
}


I still remember , a few months ago , spending more than a week trying to understand where the bug was in some code that used a large array on the stack

As soon as I replaced the stack array with a static array the bug just vanished

It was confirmed as a bug in the compiler and not in the code , and I almost went crazy trying to figure it out

I also remember another day when some code could not be compiled in optimized mode , in this case a compiler error was generated , and as soon as the optimization was disabled the code compiled perfectly , it was another compiler bug

In the future if you cannot find the cause of an unexpected bug in the code , just try another compiler , if it is a problem in the compiler you will not lose too much time trying to understand it , and remember , compilers are not perfect

Another source of possible problems is the 'Assume No Aliasing' and 'Inline Function Expansion' options , these options may generate wrong code when enabled

If you are having problems , just disable optimizations and see whether the problems still occur


String size problem

The strlen function is used to retrieve the length of a string , but the length reported doesn't include the terminating NULL character

So if you want to allocate space to copy a string , the size is strlen(string) + 1 , see ? , it is the length of the string PLUS 1

Always reserve at least one byte more in every malloc allocation for a string

Like this:

char input[] = "hello world";
char *ptr;

ptr = malloc (strlen(input) + 1);   /* the + 1 : remembering this may avoid several problems */
strcpy (ptr, input);

free (ptr);

It will help you avoid the infamous 'off-by-one' bug


Always check for correct deallocation in any malloc based code

Normal bugs

char *ptr;

ptr = malloc(100);
ptr++;
free(ptr);   /* wrong : not the address returned by malloc */

Here the wrong address is being used to deallocate the memory , it is not the address received during the allocation , and this is undefined behavior , the free function will not be able to really deallocate the memory and the result is heap corruption or memory leaks , it is a common mistake

When using free the argument needs to be exactly the address received from malloc , and not an adjacent address


The interesting thing with C is that some bugs in the code will in some cases not make the application crash immediately

Indeed , in some cases like this:

char *ptr;

ptr = malloc(1000);

strcpy(ptr, "hello world ");

free(ptr);

strcpy(ptr, "me again");   /* use after free : undefined behavior */

The memory was deallocated , but the address may still be valid for storing information , so it will not generate a memory access error , and the program will keep working , apparently without problems

The memory is not consistent to be used anymore , but nothing prevents someone from accessing it , in read or write mode

Mistakes like this can be avoided by adding the following line right after the call to free

ptr = NULL;

In this case any future access to ptr as a pointer will generate a runtime error , so , it is a good programming practice to always set a pointer to NULL after the free function is called


The volatile variable utilization

When a function receives information through a pointer like this:

int test(int *ptr)

The value reached through ptr is treated conservatively by the compiler , almost as if it were a volatile variable , because other pointers may refer to the same memory ( aliasing ) , and such accesses cannot be freely optimized

Volatile means this to the compiler : "This variable may change at any time , any access or modification needs to go to the variable in memory as soon as possible , and it cannot be optimized"

This means that the compiler will not generate code to keep the value in a register during the code execution

To avoid this problem replace the code with this

int test(int *ptr)
{
  int iptr = *ptr;

  /* ... work with iptr ... */

  *ptr = iptr;
}

In this case , the value of *ptr is now in the iptr variable , it is local and can be optimized by the compiler to keep its value in a register , improving the execution speed

Remember , a value reached through a pointer parameter is treated conservatively by the compiler , so use a local variable to hold the value until the end of the computation

If you don't believe it , create a small test using a pointer dereference inside the function , and then replace it with a local variable

Compile , test , and you will see that the version using the local variable runs faster

( in compilers with good alias analysis , like mingw , values entering a function through a pointer are often optimized anyway , unless declared volatile )

Indeed , normal computation doesn't require volatile variables , only embedded or hardware facing development requires this kind of variable , so , avoid declaring a variable volatile unless you really need it , because it will slow down code execution


Absence of tests

Much C code is written just to execute a task

Ok , that is exactly what is expected , but it is interesting how much code was written to execute only exactly that task , and will just fail miserably when wrong data is passed to it

I will use as an example the mpeglib that is part of the Lame encoder project

The mpeglib is based on the mpeg123 mp3 decoding engine , it is used to decode mp3 , and works very well when decoding valid mpeg 1 layer 1 , 2 and 3 streams , but it just stops working and starts generating unexpected memory errors when fed invalid data

The developer has not tested it against random binary data , and as soon as you send random binary data to be decoded it will just crash

Why ?

Because the developer has not written the code to handle random binary data , and has not tested it

So , anyone interested in using mpeg123 needs to add code to handle wrong user input

The same occurs with the Info-zip compression code , it is a very good zip compatible compression scheme , very well tested , but it just fails miserably when converted from an executable to a dll

Why ?

Because it has bugs in the memory allocation and deallocation that can only be seen in dll mode

In dll mode all the memory allocated needs to be freed at the termination of the execution

That doesn't occur correctly , so , memory leaks occur every time the function is called

The leak will grow until the system becomes unusable

The problem doesn't appear in executable mode because all the resources allocated are freed by the system as soon as the execution finishes

But in dll mode the resources are not freed , they will only be freed when the application using the dll finishes , so , each call to the function will increase the amount of memory used until the end of the execution of the application using the dll

This brings up a very important point : "all software needs to be tested in all possible ways to avoid unexpected problems"

All code needs to be tested exhaustively in order to catch bugs at development time and not when the application is running on the end user's machine

The test suite is one of the most important things that must be prepared before the application hits the streets

Writing a very good test suite for your software is a very good programming practice

Normally when a bug occurs , it will be reported by an end user , and believe me it is not that good to receive bug reports about silly things that could have been corrected if the correct sequence of tests had been applied


About constant variables

Always hold a constant value as a numeric literal and not in a variable , even a const one

A numeric literal is embedded directly into the generated code as an immediate value

A code like this

ret = 100;

Is replaced in the executable code with something like

mov eax , 100              ; ret kept in a register

or

mov dword ptr [ret] , 100  ; ret kept in memory

While a code like this

ret = a ;

Will expand to something like this

mov eax , ebx

Using numeric constants is the faster way to execute the code , and it will free one register to hold other values


About tests

It is important to remember that a very well designed program will never generate something like memory corruption , but we are not perfect , we are subject to errors , and what we need to minimize is the possibility of an error in the code going undetected

If something is wrong , it is better that you find it before your customers do

This is why the checks in your code need to be done , to avoid potential problems in the future


About array allocation

To allocate an array you only need to define it , and understand the range of the allocation , take a look:

char temp[10];

This allocates 10 chars in memory , with valid indexes ranging from 0 to 9 , and not 10 , so this:

temp[10] = 1;

Is wrong! , element 10 was not allocated , it is out of the range of the array

The valid range is from 0 to 9 and not 10 , this is one of the mistakes that people new to C always make

You can allocate memory dynamically with malloc , like this :

char *temp;

temp = malloc(10);

It has allocated the same amount of memory as the char temp[10] , and it ranges from 0 to 9 in the same way

The difference is that with dynamic allocation you need to call free when the memory is not used anymore , like this:

free(temp);

If you don't do it , the memory will remain reserved until the end of the execution of the whole application , this is a memory leak , and it may make your system slow if too much memory is allocated and not freed

If you are not sure whether the last position of the array will be used or not , just allocate 2 or more extra positions in the array , it will make it more certain that the array is large enough to hold the data that you want to pass to it


Securing the utilization of arrays

To avoid problems like buffer overruns you can use an end of array check

An end of array check is a final array position , allocated to hold a predefined value that will be checked later during the code execution to test the consistency of the array

To use it , take a look:

You will allocate a 10 character array like this:

char *temp;
temp = malloc(10);

But we need to add one more position that is the check , then it becomes:

char *temp;
temp = malloc(10 + 1);   /* one extra position for the check value */
temp[10] = 254;

Now position 10 in the array has a special value 254 , that will be tested during the code execution for consistency , in case the value is not 254 , then in some manner the code has overwritten the boundaries of the array , and the execution needs to stop

To test for it , add a reference to <assert.h> in the code and add the test code:

assert((unsigned char)temp[10] == 254);

It will test the consistency of the utilization of the array , if the value is not 254 , the application will stop immediately reporting the line and source file that generated the error

If correctly used it can make buffer overruns and buffer exploits things of the past

If you didn't know it , this is essentially what the new msvc compiler does to avoid buffer overflows in IE and Outlook Express , and apparently in part it is working , and all of the new SP2 for Windows XP was compiled using verifications like this

A simple trick that can solve many problems

In order to use the assert function during development and at runtime just do the following:

Add #include <assert.h> as the last include file in the source or header file, like this:

#include <stdio.h>
#include <math.h>
#include <unistd.h>
#include <limits.h>
#include <assert.h>

But you need to undefine the symbol NDEBUG in case it was defined before , so modify it to:

#include <stdio.h>
#include <math.h>
#include <unistd.h>
#include <limits.h>
#undef NDEBUG
#include <assert.h>

Now the assert function will be able to test the condition and call abort with a message in case some error occurs in your code

A buffer overrun is something that needs to terminate the execution of your code

You can remove the assert calls from the code using #define NDEBUG before the inclusion of assert.h , but in case you have not tested the code very well , the problem will go undetected and it will generate an unexpected error in the future , or wrong behavior of the application

The goal of this paper is to teach you how to avoid the major problems and limitations of the C language , and it requires a few sacrifices , and checking the consistency of array boundaries is by far the most important of them

In the future all compilers will support automatic array boundary checks by design , for now the only one that has these features , and that apparently is already giving good results , is the new msvc compiler , if you want to know more , learn about the new /GS compiler option , documented by Microsoft , when using mingw you need to write the checks by hand

Anyway , it is always a very good programming practice to check the consistency of the array boundaries at the end of the utilization of the array , it can solve many possible problems , during an array boundary fault all the variables around the array may have their values modified , which can cause unexpected results , not to say results very hard to track down and fix


What to do in case of errors ?

What is the best thing to do when an unexpected error is detected ?

The answer will depend on what your code is doing

And most important is that the code was able to detect the error at all , since the majority of errors go undetected due to bad error handling

Let's see some possibilities

Is it playing a MP3 ? , if yes , then clean up everything , skip a few bytes ahead in the stream and start decoding it again

Is it controlling the direction of a spaceship on lift off ? , if yes , then at least log everything that has occurred , try to inform any error handler about the problem with as much information as possible , because this spaceship is really going to explode !!! , and the people investigating it will need to discover why the error was not detected on the ground before launch ( search for 'Ariane 5 failure' on Google )

Whether your software can recover or not from an error will always depend on what the software is doing

For this reason it is always important to define what is fatal and what is not

If something is fatal , then there is no way to recover , because software cannot fix what is outside of the control of the software

But if the error is a minor problem , then you can handle it perfectly , and in one way or another your software needs to be able to detect the anomaly and report it to the user

A good example of errors not being detected are these buffer overflows reported in the software news

A buffer overflow is an error that is not even detected , probably due to bad programming practices

If someone is able to write to a buffer beyond the bounds of an array it is because the developer has not added boundary verification on arrays that are modifiable by user input

Well developed software would be able to detect , avoid , and report wrong user input , and even keep running without crashing

It appears to be easy but it is not

First because the majority of the software being used today was created a few years ago when exploits were not occurring and not even being discussed in the media , and that code will only be modified to avoid them as soon as someone discovers the flaws , if not , then it will not be fixed

Second because a programmer that has been coding the wrong way for years will not change his mind overnight , a lot of time will pass until some change occurs in the way a programmer is used to working

Third because it is almost impossible to remember how all the functions under review work , and it is very difficult to search for the problems that may occur but have not occurred yet , this reminds me of a talk at NASA about fixes to the rovers on Mars , something like this : "We have found that bug in the software , now the software is perfect again" , well correct me if I am wrong but if the software had been perfect before , then it would not have had that bug !!! , if you want to read the paper about it search for rover or Spirit in 

The truth is that the fixes will only appear as soon as someone finds the exploits , and without the exploits the software is perfect ( theoretically ) , or at least this is what the owner of the code thinks

The common sense is that the software doesn't have a flaw , until someone finds a flaw or until the application crashes

I think that this is the reason that each week a new report appears about a buffer overrun exploit in very well known software

And notice that these applications with exploits are produced by corporations that have been doing development for ages , from whom you would expect a little more knowledge about flaws and the ways to avoid them


The importance of a sequence of tests in the real world

Explaining with examples is the best way to illustrate an important issue.

As an example, I will talk about a bug that I found in an mp3/wav player I wrote.

The player set up the wave APIs based on data coming from the header of the wav file.

As soon as it was ready, it started playback, and everything was fine.

After test after test the player looked fine, so I did a final check on the code as usual and spent a few minutes trying to catch some bug.

Nothing was found, so I decided to write a function that would put simulated wav data into some files and feed them to the player, to see whether it could play the modified wav files or not.

Well, as soon as it started playback of the first wav, it crashed with an access violation.

Before explaining what happened, I want to show why this matters.

I had spent several minutes looking at the code, and I was not able to see anything that could lead to memory corruption. So, what the heck, why was it happening?

If I had checked the code, why was I not able to detect the problem?

This is the major problem with programming today: people think that just because they could not find a problem in the code, nobody will.

But this is always proved wrong, because someone will always find a problem that the developer missed during development.

So , let me continue

I enabled the normal debug procedures to see where the crash was occurring.

Quickly I found that the code was reading too much data and overwriting a destination buffer that had only limited space.

Checking the values, I saw that the size of the copy was too large because of an unexpected value in a variable coming from the header of the wav file. Since the wav file had been generated with random invalid data, it was easy to see that a range validation of the data was required.

So a validation of the data was added: if a value is out of range, an 'Invalid wav file' error is reported and the application stops execution.

This was added for all the values coming from the header, so the player only runs if every value is inside its range limits.

As soon as the modification was done, the player was able to report errors and recover from all the randomly generated wav files without any problem.

Why did the problem occur?

Because I had not added validation of user input.

But why had I not added validation of user input, when I know that user input can fall outside the range limits of any software?

This is a good question that I am trying to answer.

The only solution I can offer for these problems is: "If something comes from outside and enters your functions, you need to validate the data before executing the code, otherwise your software will not be reliable."

And as someone already said "If something can go wrong , it will go wrong"

Even if your code works flawlessly with correct user input, that does not mean your software is reliable.

Remember this and write it down: your software can only be considered reliable once you can prove that it handles valid data and invalid data in the same way, recovers, reports errors, and never reaches a point that was totally unexpected during development.

An unexpected event can never occur in reliable software.

You need to handle the unexpected in the same way you handle valid data

You always need to test with both valid and invalid information.

Your software needs to follow two steps during development:

1 - The addition of code to handle valid data

2 - The addition of code to handle invalid data

Another interesting point is that it is not enough to add code that checks the validity of the input; you also have to test that code.

Just because you have added the code to check the validity of the input does not mean that it works.

Generate some tests with wrong input data to see whether the software can detect and recover from it.

Tests are the validation of the code; only the tests can show whether it is correct.

Believe me: if you write the tests, make everything work, check everything, and then one of your users reports a flaw discovered in the software, that flaw will be something that was missing from your tests, correct?

So what is more important, writing good code or writing a good sequence of tests?

The secret of computer programming is in the sequence of tests that validate your code.


More to come in the future

This is an attempt to help the C programming community avoid mistakes and fix problems, and our wish to share the knowledge we have gathered over these years working with the C language.

We entered C language development a few years ago in order to speed up the execution of our code, originally written in VB. Since then we have adopted C as our primary language, and we are quite happy with its power.

Ricardo Santos Pereira
RSP Software




