It's likely that you have a buffer overrun, and your change has moved the corruption from the stack to the heap; it just happens that the program can tolerate the heap becoming corrupt.
Some explanation: the "heap" is the memory your program manages through functions such as malloc() and free(). By comparison, the "stack" is storage space available to your program which does not need to be managed through malloc() and free(). The local variables you define in your program's functions come from the "stack".
You need to restore the version that crashes and then dismantle the program, removing functionality until it stops crashing. There are times when I've not been able to spot my dumb mistakes, so I just throw away the code and start again.
Easier said than done
It's possible that you might just repeat the same mistake again.
There is some value in trying to understand what the problem actually is, although at the time it will feel like some of the worst-spent minutes or hours of your life.
In the '90s I knew someone who spent days trying to figure out why he was getting corrupt memory, and in the end it turned out he had mixed up some strcmp() and strcpy() calls.
Tell me about it... The 'C' runtime library design is as much a double-edged sword as the language itself. Scores of books have been written about how not to fall into the traps you unknowingly set for yourself by using 'C' (one of the first books being "'C' traps and pitfalls" by Andrew Koenig).
From my experience, the books do have value (e.g. Andrew Koenig's book details defensive programming measures which help you cope with the unavoidable side-effects of the language design), but in the end remembering all the likely pitfalls, and some of the merely possible ones, is a burden.
What helps?
Learn how the 'C' language design may trip you up, because while you may be able to build a better set of functions and data structures for managing strings (for example), you cannot do the same with the 'C' language.
The 'C' programming examples and literature you are using to learn 'C' might be teaching programming practices which are long obsolete. These practices make for more compact example code which is easier to understand, but that comes at a cost. Using functions such as sprintf(), strcpy(), strcat() or memcpy() in your program may seem the obvious choice, but it isn't. All of these functions are unsafe, because you have to be acutely aware of how much data they write to the destination: they don't know when to stop, or why to stop.
I rewrote my "term" application (http://aminet.net/comm/term/term-main.lha) in around 1995 to use snprintf(), strlcpy(), strlcat(), memmove(), etc. It was quite a humbling experience to learn how these changes improved the overall stability of the program. Bugs came to light which had been impossible to spot, because they were caused indirectly by buffer overruns and memory corruption.
Finally, it may make sense to write your own little library of functions which do roughly what the 'C' runtime library does, but which give you more control and insight into how they are being used. For example, the 'C' runtime library performs little to no sanity checking on function parameters; you could write your own functions which do. The 'C' runtime library also contains many functions whose parameter order is inconsistent (the input may be the first parameter, or the last, or the second); you could write your own functions which are more consistent.
There are reasons for sticking with the 'C' runtime library, such as that it is likely to be very well-tested and almost free of bugs. But design decisions made in the 1970s, leading to lack of consistency and lack of sanity checking, are not the kind of "bugs" which a well-tested runtime library will resolve. You might be better off making your own choices.