Sunday, November 23, 2014

Winter is coming

Reposter's note: the original post has already vanished, as if it had never existed. I am posting it here as a small memorial.

A few years ago I laughed at a certain big name in the tech industry. He said at the time: perhaps the generations born after 1990 and 1995 will slowly stop knowing what kind of website Google is.

That year, this struck me as the funniest joke in the world. Google, the most remarkable internet company on earth, and a generation of Chinese people living on the internet would not know its website?

Today I take back that ridicule, because this impossible thing has slowly become reality.

No one pays attention to Google anymore. To them Baidu works well enough, and they have hardly ever used Google anyway. So what if there is no Google? Everyone still happily scrolls Weibo, reads WeChat, listens to music, watches entertainment shows. For people who never knew Google in the first place, what difference does losing it make?

Years ago, we could still log on to Facebook. Honestly, the site was just as silly as Xiaonei. But on it you could see how foreigners lived, you could easily call on people ten thousand kilometers away, and you could find pages that would never be set up on Xiaonei. Reply in Chinese and the person chatting with you below might be a Hong Konger or a Taiwanese. Reply in English and some lonely Northern European whose English was even clumsier than yours might strike up a conversation. You felt the earth had truly become a global village: before you had even pulled the door open to step out, someone else had pushed it open and walked in.

Then it was gone. At first its disappearance stirred up a great deal of noise; later the noise disappeared too.

Years ago, we could also log on to Twitter. Honestly, like Weibo, it was just a stream of information, and you could scroll it all day without much to show for it. But at least you could learn whatever new thing you wanted to know as fast as possible, and you could genuinely see what was popular around the world, rather than things filtered through rounds of screenshots, translation and reposting, or even distortion, quoting out of context and turning black into white. What you got was the truth, naked, perhaps a little too brief, but at least not worked over and reworked by countless hands in between; that is exactly where extremism and one-sidedness are born, whether the later hands intend it or not.

Then it was gone. First the original was gone, then its imitator was gone, then the imitator's imitator was gone. Only the imitator's imitator's imitator is left, and now you can see countless advertisements on it every day.

Years ago, we could also log on to YouTube. To some people that site was just a giant Youku. Back then people swore that even without YouTube, we Chinese would soon make Youku surpass it. Yet all these years later the videos still stutter, the content is still garbage, original work is still stolen just as easily, and the range of videos is still just as pitiful. On YouTube you could watch the world's finest craftsmen, the funniest jokes, the most unbounded creativity, the most stirring music, the most beautiful perfect moments; on Youku, if you want to watch a one-minute video, please watch thirty seconds of ads first.

Oh, right. Instagram. Some people may feel it is about the same as QQ Zone. But I followed more than six hundred photographers on it, every one of them a superb recorder of images, and looking at their work every day made me feel happy, the happiness of being somewhere in spirit without ever having gone there. On it I also got to know a Japanese guy who loved taking selfies, a Korean uncle who loved his drink, an old American gentleman who had visited China ten years ago and would like every photo of the Forbidden City I posted, and an impossibly beautiful Russian girl. I could barely communicate with any of them, language was a huge barrier, but a few simple words got the feeling across, and sometimes that was more exciting than a reunion with old friends. Because this is the process of different human communities freely exchanging and communicating with one another, and that process is magical, truly magical.

But now it is gone, and it is gone because at some particular time, searching for some particular terms, you could turn up some particular photos. Not many people searched that way, and those who saw the results would not make a fuss, would not feel that the sky had gone dark, or grown light, or was about to fall, or was about to change. But it is gone all the same. Instagram, just like that, gone. Google went the same way, Twitter went the same way, Facebook went the same way. Some person, we do not know who, on some occasion, said something and made some decision, and more than a billion people, as if stranded on the island of Gotham City, watch one bridge after another being blown up, and blown up, and blown up, until nothing is left.

I often feel sorrow, real sorrow. Someone I do not know and cannot name, perhaps a whole group of people, keeps snatching away the things around me, and I can do nothing about it. I complain, and he does not hear me; no one hears me. I shout in anger, and most of the people around me look at me as if I were mad. I wail, and the sound is trapped behind the black curtain wall. I let out a piercing scream, and it carries no distance at all before, like the things taken from me, it vanishes, disappears, as if it had never existed.

And who would care about something that never existed in the first place? How could those who come later understand the grief of those who once had it and were robbed of it? I once had everything; I once had the world. I stood on this land breathing the air of freedom, drinking the nectar of freedom. Over a stretch of time too long to count, one part after another of my free life was killed off, killed suddenly. And yet I kept feeling they were still alive, barely breathing, as if they were dying slowly instead.

But in the end they are dead, and with their deaths more and more things slowly happened, very slowly, almost imperceptibly, but they happened all the same.

No Google? I can use Baidu. But certain results get pushed further and further back, further and further back, until they are gone, as if they were never supposed to come up at all.

No Facebook? I can use Xiaonei. But an article you could only have posted on Facebook soon goes missing on Xiaonei. Then Xiaonei becomes Renren, and the topics become the topics everyone cares about. Everyone scrambles to read horoscopes, celebrities, gossip, entertainment. No one cares about what has disappeared; it never had much presence anyway.

No YouTube? I can use Youku. But on Youku you often see nothing but work copied from someone else, uncredited, and the copier is smug about it, pleased with himself, as if the idea had been his all along. You watch it and exclaim at how creative he is. What a creative piece of plagiarism, and you will never know, because you do not know that there is a website in this world called YouTube.

No Twitter? I still have Weibo. But when you want to know what has happened lately, the more diligently you search, the more often you see "in accordance with relevant laws and regulations, the relevant search results cannot be displayed." After a while you think that knowing would not change anything anyway, so you might as well stop looking.

Slowly, one door after another closed. Today you open the largest blogging site in the world and find it is gone. Tomorrow you look and the best design-sharing site in the world is gone; at first it merely loaded very, very slowly, and then it was gone. A couple of days later the media site where you used to read an article or two every day is gone; its richly varied articles all turned into the words "this page cannot be displayed." A few months after that, university websites are blocked, photographers' websites are blocked, and even Baidu Japan, one of our own sites, is gone.

Then the comics could no longer be read; then the animation could no longer be watched; then the American and British shows went missing. The sites for downloading American and British shows went missing again, and again, and again. Respect copyright, protect rights, fine; then the subtitle sites disappeared too.

The games are gone: the game site you habitually log on to now shows its download section "under rectification." The forum is closed: the forum you read every single day suddenly gets a call from the relevant department and is shut down over a "registration problem." Personal websites, private blogs, sorry, gone just like that, no matter how many years of careful work you had stored on them.

Someone you follow: one day you log on to Weibo and notice he has not said anything in a long while, so you search, and his account no longer exists, and when you search for his name, his name is not displayed.

One lamp after another went out. The light from every direction disappeared. The richly colored world we lived in turned into a single field of black.

The sky has gone dark, so let us sleep then; would that we stay drunk and never wake. Grass mud horse of the Mahler Gobi.

In the end we became a crowd of dreamers, and the name of the dream is "in accordance with relevant laws and regulations, the relevant search results cannot be displayed."

Wednesday, April 2, 2014

Sorting and efficiency analysis

Hey, this is the last week of CSC148, and it's nice to be back here to write down some final thoughts on the sorting and efficiency material from the last couple of weeks.
I am going to start this slog with a slight digression. http://www.sorting-algorithms.com/ is a quite intriguing website where you can watch animations of different sorting algorithms, and with the help of those animations I will talk about the run-time complexity of the algorithms we learned over the last two weeks.
First of all, what is run-time complexity? Basically, run-time complexity captures how the total number of steps an algorithm takes grows as the size of the input increases, and we use big-Oh (O) notation to describe the asymptotic behaviour of the algorithm (check my previous slog http://siyangcsc148w3.blogspot.ca/2014/03/introduction-to-runtime-complexity.html for more information on big-Oh and how to quickly determine the run-time complexity of a procedural algorithm, and http://siyangcsc148w3.blogspot.ca/2014/03/hi-its-nice-to-be-back.html for how to determine the big-Oh of a recursive algorithm). Since computers can't run infinitely fast, we need algorithms that take fewer steps to solve the same problem, and run-time complexity analysis plays the major role in deciding whether one algorithm is more efficient than another.
Now that you are more familiar with analyzing the run-time complexity of algorithms, I can start analyzing sorting algorithms, which have become increasingly vital in recent years because of big-data computation. I'll begin with the procedural sorting algorithm we learned: selection sort.

Selection sort starts from the beginning of a list: it finds the smallest element of the remaining list and exchanges it with the element at the front. Then the algorithm does the same thing for the list without the first element, then without the first two elements, and so on, until it reaches the last element of the list. We could verify that this algorithm actually works by using induction to prove the loop invariant that the first i elements are sorted after the i-th iteration of the loop shown in the picture. I'm not going to spend time on the correctness of the algorithm, since that is beyond the scope of CSC148, but will focus on analyzing its run-time complexity. Since this is a procedural algorithm, you can use the trick I mentioned in http://siyangcsc148w3.blogspot.ca/2014/03/introduction-to-runtime-complexity.html and count the depth of the loops that depend on the size of the input. One loop is quite obvious: 'for i in range(len(L1)):', and the number of times it iterates depends on the length of the input, so the algorithm is at least O(n). But there is another loop hidden inside 'min(L1[i:])', where every element of L1[i:] is compared in order to determine its smallest element, which is then exchanged with L1[i]. Since the loop over L1[i:] also depends on the length of L1, I can guess that the run-time complexity of selection sort is O(n^2). We can easily confirm this guess by counting the steps taken by the inner and outer loops; for simplicity we only count comparisons as steps. The inner loop that finds the smallest element of L1[i:] takes roughly n-i comparisons, and the outer loop iterates n times (where n is the length of L1) with i incremented by 1 each time, so the total number of comparisons for a list of size n is approximately n + (n-1) + (n-2) + ... + 1 = n(n+1)/2, which by the definition of big-Oh belongs to O(n^2). The same strategy applies to bubble sort, which repeatedly 'bubbles' the biggest element to the end of the list by comparing each element with the one next to it, and to insertion sort, which inserts each element into an already sorted prefix; they both have run-time complexity O(n^2) as well. Notice that although in the animation I mentioned above bubble sort looks much less efficient than insertion and selection sort, it still belongs to O(n^2), since we only care about asymptotic behaviour.
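
(The selection sort code originally appeared here as a picture that is now missing; below is a minimal sketch in the spirit of the description above. The function and variable names are my own.)

    def selection_sort(L1):
        # in-place selection sort: after the i-th iteration, L1[:i+1] is sorted
        for i in range(len(L1)):
            # position of the smallest element of L1[i:]
            smallest = L1.index(min(L1[i:]), i)
            # exchange it with L1[i]
            L1[i], L1[smallest] = L1[smallest], L1[i]
        return L1
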
Then come the recursive sorting algorithms. I'm going to elaborate on quick sort here instead of merge sort (you can check http://siyangcsc148w3.blogspot.ca/2014/03/hi-its-nice-to-be-back.html for a merge sort analysis through the master theorem and telescoping). Let's first take a look at quick sort, sketched below.
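
(The picture of the quick sort code is missing from the original post; the following is a minimal sketch of the version described in the next paragraph, using the first element as the pivot. The exact code used in class may differ.)

    def quick_sort(L):
        # base case: a list with fewer than two elements is already sorted
        if len(L) < 2:
            return L
        pivot = L[0]
        smaller = [x for x in L[1:] if x < pivot]
        larger = [x for x in L[1:] if x >= pivot]
        # sort each part recursively and put the pivot in between
        return quick_sort(smaller) + [pivot] + quick_sort(larger)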

Quick sort uses recursion by picking a pivot (here we pick the first element of the list as the pivot), dividing the list into two parts, one containing the elements smaller than the pivot and the other containing the elements larger than the pivot, then applying quick sort to these two parts recursively and placing the pivot in between. Analyzing this algorithm turns out to be much trickier than analyzing merge sort, which behaves the same way in its best and worst cases. Let's first consider a balanced list, where half of the elements are less than the first element and the other half are larger. In that case every call splits the list in half, and at the end we combine the pieces by making copies of the sorted sublists, which takes O(n) work per level. This is quite similar to the merge sort analysis in the slog I mentioned earlier, and it follows that the run-time complexity of quick sort on a balanced list is O(n log n). However, if the list is already sorted, every call splits the list into just the pivot and a list of all the other elements, so it takes n calls to reach the base case and the run-time complexity becomes O(n^2); this is the worst case for quick sort. O(n^2), as we all know, is not efficient at all, so we need a way around this problem. It turns out that picking the pivot at random, or picking the middle element of the list, makes the splits much more balanced in practice: the already-sorted input is no longer a bad case, and with a random pivot the expected run-time complexity comes down to O(n log n).
So far I've talked about both procedural and recursive sorting algorithms, so here is a brief summary of how to quickly find the run-time complexity. For procedural algorithms, as I've said before, it is mostly determined by the depth of the nested loops that depend on the size of the input, while for recursive sorting algorithms it is mostly determined by the number of divisions needed to reach the base case and by the steps needed to combine the results of the smaller cases. Below is a chart of the time complexity of the algorithms we've talked about.
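
(The original chart image is missing; the table below restates the standard complexities of the algorithms discussed, which is presumably what the chart showed.)

    Algorithm        Average case    Worst case
    Selection sort   O(n^2)          O(n^2)
    Bubble sort      O(n^2)          O(n^2)
    Insertion sort   O(n^2)          O(n^2)
    Merge sort       O(n log n)      O(n log n)
    Quick sort       O(n log n)      O(n^2)
    Count sort       O(n)            O(n)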

Notice that quick sort is O(n log n) in expectation only when the pivot is picked randomly or from the middle of the list. Count sort only deals with lists whose elements have values less than the length of the list.

So this is the end of my slog. Thanks for reading. Also, thanks to the TA who grades my slog.

Monday, March 24, 2014

Strong Induction and run-time complexity for recursive functions

Hi, it's nice to be back. In my previous slog I mentioned analyzing the run-time complexity of recursive functions and the master theorem, so I will spend most of today's post on that, using merge sort as an example.

At the very beginning I would like to introduce a digression: the effectiveness of strong induction in writing recursive functions. The idea of strong induction is that, by having a working base case and assuming the function works for all smaller cases, the function will eventually solve the bigger problem. Take the inorder traversal we did in Lab 9: I simply wrote something like the following.
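
(The original code appeared as a picture that is now missing; this is a hypothetical reconstruction that follows the case analysis in the next paragraph. The BTNode and LinkedList classes here are simplified stand-ins for the ones used in the lab, and prepend_list is my own name for the tweaked prepend that accepts a whole linked list.)

    class BTNode:
        """A binary tree node with a value and optional left/right children."""
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    class LLNode:
        """A node of a singly linked list."""
        def __init__(self, value, nxt=None):
            self.value, self.nxt = value, nxt

    class LinkedList:
        def __init__(self):
            self.front = None

        def prepend(self, value):
            # put a single value at the front of the list
            self.front = LLNode(value, self.front)

        def prepend_list(self, other):
            # put every value of another LinkedList, in order, in front of this one
            values = []
            node = other.front
            while node is not None:
                values.append(node.value)
                node = node.nxt
            for v in reversed(values):
                self.prepend(v)

    def inorder(t):
        """Return a LinkedList of the values of binary tree t in inorder."""
        if t.left is None and t.right is None:   # base case: a leaf
            lst = LinkedList()
            lst.prepend(t.value)
            return lst
        if t.left is None:                        # no left subtree: root, then right
            lst = inorder(t.right)
            lst.prepend(t.value)
            return lst
        if t.right is None:                       # no right subtree: left, then root
            lst = LinkedList()
            lst.prepend(t.value)
            lst.prepend_list(inorder(t.left))
            return lst
        lst = inorder(t.right)                    # general case: left, root, right
        lst.prepend(t.value)
        lst.prepend_list(inorder(t.left))
        return lst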

So I have a working base case: if a binary tree has neither a left nor a right subtree, simply return the linked list containing the root of the tree. Then there are three other cases: a tree without a left subtree, a tree without a right subtree, and a complete tree. For the first two, by strong induction I assume my function works for the smaller cases, so I just change where I prepend the root of the tree and call the function on the remaining subtree. For the last case, also by strong induction, I call the function on the right subtree, prepend the root, then prepend the linked list generated by calling 'inorder' on the left subtree. And then I'm done: by assuming the function works for smaller cases, I eventually solved a larger case. Note that you need to change the prepend method a bit so it can prepend whole linked lists. So with strong induction, all I need to do is find the base case and use the smaller cases to construct the larger case, and that makes writing recursive functions much easier.

It's a huge digression actually, ha-ha! Now I'm back on track. I'll take the merge sort we did in class as an example; a sketch of it follows.
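
(The picture of the merge sort code is missing from the original post; this is a minimal sketch of the version described below, with a simple merge helper added so the code is complete. The in-class picture omitted merge for space.)

    def merge_sort(L):
        # base case: a list of length 0 or 1 is already sorted
        if len(L) < 2:
            return L[:]
        # recursive step: sort each half, then merge the sorted halves
        return merge(merge_sort(L[:len(L) // 2]), merge_sort(L[len(L) // 2:]))

    def merge(left, right):
        # combine two sorted lists into one sorted list in about c*n steps
        result, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]
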
Notice that the version shown in class was not complete, since the merge function was left out due to space limits. For merge sort, by strong induction, this should work: it has a valid base case and uses the smaller cases to construct the larger case. Its run-time complexity, however, is not as easy to find. Since run-time complexity is about the worst case, I'll ignore the branch where the length of L is smaller than 2 and analyze the else part. The first thing to do is to let M(n) denote the total number of steps merge sort takes on a list of length n. Then both merge_sort(L[:len(L)//2]) and merge_sort(L[len(L)//2:]) take approximately M(n/2) steps (ignoring constants, because of the nature of big-Oh). For merge, since it's a procedural function, we can easily see that merging two lists of length n/2 takes about c*n steps, where c is a constant. So
M(n) = M(n/2) + M(n/2) + cn = 2M(n/2) + cn = 2(2M(n/4) + cn/2) + cn = 4M(n/4) + 2cn = .... So by telescoping this recurrence,
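
(the original picture of the telescoping work is missing; assuming n is a power of 2, dividing both sides by n and unrolling gives roughly the following)

    M(n)/n = M(n/2)/(n/2) + c
           = M(n/4)/(n/4) + c + c
           = ...
           = M(1)/1 + c*log2(n)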

we can see that M(n)/n grows like log n, so M(n) is in O(n log n). For more information on proving this, I suggest you check this PowerPoint (http://www.cs.princeton.edu/courses/archive/spr07/cos226/lectures/04MergeQuick.pdf). So the idea is to first find the recurrence for the run time and then telescope it; eventually the recurrence can be solved and the big-Oh bound comes out.


That's it, and good luck on term test 2.

Monday, March 17, 2014

Introduction to Runtime Complexity

Hey, sorry that this slog is a bit late, since I was quite sick last week, but now that I'm back I can write down some of my own reflections on run-time complexity.
First of all, what is the run-time complexity of a program? Basically it is how the total number of steps the program takes grows with the size of the input n, and we formally use O(f(n)) to describe the asymptotic behaviour of the program. Here 'the program is in O(f(n))' is defined as: there exists a positive constant c and a natural number b such that for all n bigger than b, the total number of steps of the program is less than or equal to c*f(n). That looks obscure and doesn't seem to make much sense at first glance, so here is my own interpretation: for the run-time complexity of a program to be in O(f(n)), the growth rate of its total number of steps with respect to the size of the input must be at most that of f(n). So if the total number of steps a program takes on an input of size n is n^2, we say its run-time complexity belongs to O(n^2).
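
As a small worked example of the definition (not from the original post): if a program takes T(n) = 3n^2 + 5n steps, then T(n) is in O(n^2), because 3n^2 + 5n <= 4n^2 for all n >= 5, so the definition is satisfied with c = 4 and b = 5.
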
Next, why should we even care about the run-time complexity of a program? Some may claim that as long as the program works, it is a good program. This makes sense when the input is relatively small, but suppose the input size is one million, and one program has run-time complexity log(n) while the other has n^2. The log of one million is about 20, while the square of one million is 10^12, and that is a huge difference: computers nowadays, though fast, still have limits, and 20 steps obviously take far less time. So correctness differentiates bad programs from good ones, while efficiency differentiates good programs from better ones.
Now that I have given an interpretation of run-time complexity and a reason to care about it, I would like to go a bit further and show how to analyze the run-time complexity of a program. There's a small trick I found: the nesting depth of the for loops that depend on the input size usually determines the power of n in O(n^x). Take the following program as an example.
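
(The program originally shown here was an image that is missing; the following stand-in nested loop illustrates the same point.)

    def count_pairs(x):
        # two nested loops that each run x times: roughly x * x steps in total
        total = 0
        for i in range(x):
            for j in range(x):
                total += 1
        return total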

This program has a nested for loop, so its depth is 2, and both for loops depend on the size of the input x: the inner loop runs x times and the outer loop runs the inner loop x times, so the run-time complexity of this program is, obviously, O(n^2). Also, always remember that constants are normally ignored when calculating run-time complexity, because of the definition of O(f(n)).

Though my trick works for many programs, it has a limitation: it only applies to procedural programs, while for recursion more work needs to be done. I was stuck on how to calculate the run-time complexity of recursive programs until this week's CSC240 lecture, where I was taught the master theorem (check http://en.wikipedia.org/wiki/Master_theorem for more info), and now I have a weapon to tackle the run-time complexity of recursive programs. I'm still learning, and I'll come back later this week to write more on analyzing recursive programs.

Sunday, March 9, 2014

What is linked list and when to use linked list


During the last two weeks we spent most of our time dealing with linked lists, and I kept wondering why I should use them. Now that I finally have a week without midterms, I can write down some thoughts.
First of all, what is a linked list? In my own view, a linked list is made of elements with two inner attributes: the first attribute points to an object (the value), while the second attribute references the element behind it. This might be a bit abstract, but if you look at the following sketch, I am sure you can easily grasp the idea of a linked list.
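
(The diagram from the original post is missing; here is a minimal node class that plays the same role, with names of my own choosing.)

    class LLNode:
        """One element of a linked list."""
        def __init__(self, value, nxt=None):
            self.value = value   # first attribute: the object this node holds
            self.nxt = nxt       # second attribute: reference to the next node

    # the linked list 1 -> 2 -> 3
    front = LLNode(1, LLNode(2, LLNode(3)))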

As I just said, the first attribute points to a value while the second attribute serves as a pointer to the next element, so if you want to get the second element of the linked list, you need to follow the reference from the first element to reach the second, and the same goes for the third element, the fourth element, and so on. This idea of extracting an element from the previous element is the idea of recursion, where the solution to one task depends on the solution to the previous task, and the code looks like the sketch below. (Check http://siyangcsc148w3.blogspot.ca/2014/02/what-is-recursion.html for more on recursion if interested.)
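
(The original code was an image; a minimal recursive version, reusing the LLNode class above, might look like this.)

    def get_item(node, i):
        # follow the references recursively until the i-th node is reached
        if i == 0:
            return node.value
        return get_item(node.nxt, i - 1)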

However, when I was doing the lab in week 6, I found a serious problem with this way of getting elements. As I stated before, most languages have a maximum recursion depth to keep programs from recursing forever, so suppose we have a linked list of 10,000 elements and we want the 5,000th element: because of the recursion limit, we cannot get it, since doing so requires 4,999 nested calls, far beyond Python's default limit, and this inevitably results in an error. So the only way to access an element is step by step, finding the previous element and then moving to the next, which seems quite inefficient compared with lists, where we can just use an element's index to reach it. Someone may therefore say that there is no need for linked lists and that lists are enough. Under most circumstances I will agree, but there are cases where a linked list is much more efficient, because it allows constant-time (O(1)) insertion and deletion at a known position: to insert an element, all we do once we are at that position is point the previous element at the new element and point the new element at the element that used to be there, without touching any other element. By contrast, if we want to insert an element into the middle of a list, we have to move every element after it one place to the right to leave room, and as the list gets longer, moving all those elements takes longer and longer, which is obviously quite inefficient. So for random elements, yes, we should use a list, since it allows random indexing; but for sorted elements, where we know exactly where to insert, we should definitely use a linked structure. This idea is applied in binary search trees, in which a tree is built with nodes placed in order, so we can easily insert a node by changing a couple of pointers, or reach an element from the root following the order we designed, without worrying too much about the length of the structure.
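
For example (my own illustration, reusing the LLNode class from above), inserting after a node we already hold changes only two references, no matter how long the list is:

    def insert_after(node, value):
        # splice a new node in right after 'node'; only two references change
        node.nxt = LLNode(value, node.nxt)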

To sum up, when we want to store different objects without any particular order, a list is the better approach, since it allows random indexing; but if we want to keep objects in order, a linked list is the better approach, since it allows constant-time insertion and deletion once the position is found.

Friday, February 28, 2014

What is Recursion

It's the seventh week of CSC148, and with the midterms for CSC148 and CSC240 I didn't have much time to sit down and think about what recursion is; but since I've survived the week, I finally have some time to write down my thoughts on it.

Recursion, in my own view, is dividing a problem into several smaller problems that are easier to solve, whose solutions, when combined, form the complete solution to the original problem. Here I'll take a simple example: the Fibonacci sequence (see http://siyangcsc148w3.blogspot.ca/2014/01/some-thoughts-on-recursion.html for detail; a minimal version is sketched below). Suppose we want to know the nth element of the sequence; then we need to know the (n-1)th and (n-2)th elements, since every element of the Fibonacci sequence is the sum of the two elements immediately preceding it, and to know those two elements the same process is applied, until we reach the first two elements, which are both defined to be 1. So the process is really breaking a bigger problem down into smaller problems until a solution is reached, and then the solutions are passed back up and combined to form the complete solution. The solution to the smallest problem is called the base case, and the steps that reach the complete solution are called the recursive steps, or constructor cases.
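
(A minimal sketch of the Fibonacci function described above; the code in the linked post may differ.)

    def fib(n):
        # base case: the first two elements are both 1
        if n < 3:
            return 1
        # recursive step: each element is the sum of the two before it
        return fib(n - 1) + fib(n - 2)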

But then comes the question: why should we use recursion? I'd say that recursion, used properly, can significantly reduce the length and complexity of a program (check my previous post http://siyangcsc148w3.blogspot.ca/2014/01/some-thoughts-on-recursion.html for detail). Take the function we used in Assignment 1 for moving cheeses among four stools: a recursively defined process needs only about three lines to reach the goal, while a procedural version would be quite lengthy, with so many cases that one could easily miss something; and even if one did consider all the cases, the code would be long and harder to understand than the three-line recursive function. But when is a good time for recursion? Certainly many recursive functions could be written with for loops, like problem 1 in the extra section of lab 4, writing a recursive function that reverses a string s; it can easily be written with a for loop, as we learned in CSC108, in about three lines (both versions are sketched below). So reversing a string is probably not a good occasion for recursion, since recursion neither shortens the code nor makes it easier to understand. Nevertheless, there are places where recursion plays an important role: when a program would require handling many different cases inside for loops, recursion can be quite useful, as in the Assignment 1 example above. So when thinking about using recursion, first consider the for-loop version of the program, and if that version doesn't need to handle too many cases and isn't hard to understand, use the for loop instead. Also, if the same subproblem needs to yield different solutions, recursion is, I'd say, of little help, since it relies on each smaller problem having a single solution that can be used to build the solution to the bigger problem. One more thing: recursion can't be used directly on particularly deep problems, since most programming languages limit the recursion depth to a certain level to avoid crashing the whole system, so for such problems either design the recursion specially or find another way without it.
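
(A quick sketch of the two versions of string reversal compared above; this exact code is my own, not from the lab handout.)

    def reverse_rec(s):
        # recursive version: the empty string is the base case
        if s == '':
            return ''
        return reverse_rec(s[1:]) + s[0]

    def reverse_loop(s):
        # for-loop version from CSC108: build the result back to front
        result = ''
        for ch in s:
            result = ch + result
        return result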


Finally, I want to say a bit about debugging recursion. Basically, a recursive function needs a base case and recursive steps that lead toward the base case, or at least to smaller cases. You can find more in my previous slog http://siyangcsc148w3.blogspot.ca/2014/02/at-end-of-this-slog-httptracycsc148.html.

Monday, February 17, 2014

Debugging recursion in a systematic way

At the end of this slog http://tracycsc148.blogspot.ca/2014/02/csc148-slog-object-oriented-programming.html, the author mentioned that "a recursive function will not stop running until the condition in the 'else' part is met", which sounds easy to understand, but such an error is actually hard to recognize when coding. So here I am going to talk about how I debug recursive functions.
First of all, the most obvious kind: syntax errors. Being obvious doesn't mean they are always easy to recognize. Take the preorder function for Trees,

(This is not quite right. Please refer to the CSC148 website for the correct version.)
When brackets and parentheses get mixed up, it becomes much harder to spot at a glance where we missed a right parenthesis or a bracket, and it's very likely we'll need to spend quite a while reading line by line when coding larger projects. Such things are hard to avoid entirely, but we can reduce them by splitting the code into separate blocks: instead of writing a ternary if expression, writing the branches as separate blocks, as we did in CSC108, makes such 'silly' errors more apparent and can save a lot of time.
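
For comparison (my own sketch, assuming a Tree node with value, left and right attributes), a correct one-line preorder looks roughly like this; the nested brackets and parentheses show how easy the slip described above is to make:

    def preorder(t):
        # one-line version: the brackets and parentheses nest several levels deep
        return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)
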
Then come the run-time errors, where problems occur under most circumstances. Most people who try to write recursive functions have seen an error message like 'maximum recursion depth exceeded', which indicates the program has fallen into infinite recursion. To fix such an error, the very first thing, for me, is to test the base case: infinite recursion always means the program can't reach the base case, or the base case itself keeps recursing. So it is essential to trace the base case and make sure it is functioning. Take, for instance, the Fibonacci function I wrote in my previous slog (http://siyangcsc148w3.blogspot.ca/2014/01/some-thoughts-on-recursion.html): a functioning base case is essential for keeping the program from recursing forever. After the base case, check the constructor cases, starting from small cases like the third element of the Fibonacci sequence, and print out what every recursive step returns. Again, I'll take my fib function as an example.
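
(The original code was a picture that is now missing; this is a plausible reconstruction of the traced version, printing a (result, n) tuple at each call.)

    def fib(n):
        # print (result, n) at every call so the recursion can be traced
        if n < 3:
            print((1, n))
            return 1
        result = fib(n - 1) + fib(n - 2)
        print((result, n))
        return result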

What I did was print tuples made up of the fib result and its associated number.
Suppose I type fib(4): it will print (1, 2), (1, 1) and (2, 3) along the way and finally return 3. So I will know that fib(2), fib(1) and fib(3) are all working, and if one of them isn't, I can quickly find out at which point the recursion failed and fix it.

I used this method in Assignment 1, where my Tour module worked for 8, 9, 10 and 11 cheeses but not for 12. As I printed out all the recursion steps, I found that my initial setup for 2 cheeses was wrong: I had set i = 0 for 2 cheeses instead of 1. It saved a huge amount of time compared with tracing by hand or wandering around.