In week 11 of csc148, two powerful Python functions were introduced in lecture: property() and reduce().
The property() function takes four parameters (fget, fset, fdel, doc). fget, fset, and fdel are methods of the class that return, set, and delete the value of a private attribute, and we can add restrictions inside these methods. For example:
def set_symbol(self: 'RegexTreeNode', s: str) -> None:
    """Set the private symbol."""
    if s not in '01e|.*':
        raise Exception('Invalid symbol: {}'.format(s))
    else:
        self._symbol = s
Then, if we add symbol = property(get_symbol, set_symbol, None, None) in the class:
>>> a = RegexTreeNode('1',[])
>>> a.symbol = 'a'
the assignment raises the exception, because 'a' is not a valid symbol.
Another use of property() is to prevent the value from being set or deleted by users (i.e. to make it read-only). If we add symbol = property(get_symbol, None, None, None) in the class, we can no longer set the value of object.symbol.
In summary, writing a = property(...) in a class body creates a new attribute a of the class, and we can put restrictions on the getting, setting, and deleting of that attribute.
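To make the example self-contained, here is a minimal sketch of such a class. Only set_symbol appeared above, so the get_symbol method and the __init__ are my own guesses at what the lecture code looked like:

class RegexTreeNode:
    """A node in a regex tree whose symbol attribute is restricted."""

    def __init__(self, symbol: str, children: list) -> None:
        self.symbol = symbol      # goes through the property's setter below
        self.children = children

    def get_symbol(self: 'RegexTreeNode') -> str:
        """Return the private symbol."""
        return self._symbol

    def set_symbol(self: 'RegexTreeNode', s: str) -> None:
        """Set the private symbol, rejecting invalid symbols."""
        if s not in '01e|.*':
            raise Exception('Invalid symbol: {}'.format(s))
        self._symbol = s

    symbol = property(get_symbol, set_symbol, None, None)

With this class, RegexTreeNode('1', []) works, while assigning 'a' to .symbol raises the exception.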
The second powerful function is reduce(). It takes three arguments (function, iterable, initializer). For example, reduce(lambda x, y: x + y, [1, 2, 3, 4, 5], 0) calculates 1 + 2 + 3 + 4 + 5 = 15. The first argument must be a function that takes two arguments. The third argument is the initial value; without it, there will be an error if the iterable is empty. The reduce() function makes it simple to apply a function to an iterable over and over again and finally reduce the iterable to one value.
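A couple of quick examples (note that in Python 3, reduce lives in the functools module and has to be imported first):

from functools import reduce

# fold the list left to right: ((((0 + 1) + 2) + 3) + 4) + 5 = 15
print(reduce(lambda x, y: x + y, [1, 2, 3, 4, 5], 0))   # 15

# without the initializer 0, an empty iterable would raise TypeError
print(reduce(lambda x, y: x + y, [], 0))                # 0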
Tuesday, November 26, 2013
Friday, November 8, 2013
csc148_slog_sorting_algorithm_and_linked_traversal
In week 9 of csc148, we learned some sorting algorithms and their time complexity in lecture, and in the lab we learned how to traverse a binary tree into a linked list.
The first sorting algorithm is selection sort. The basic idea is to select the maximum (or minimum) element and swap it with the first element, then repeat on the remaining sublist (list[i:]) over and over again. The time complexity is O(n^2), because it needs 1 + 2 + 3 + ... + n steps to finish.
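A short sketch of that idea (my own version, selecting the minimum each round):

def selection_sort(lst: list) -> None:
    """Sort lst in place by repeatedly selecting the minimum."""
    for i in range(len(lst)):
        # find the index of the smallest element in lst[i:]
        min_index = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[min_index]:
                min_index = j
        # swap it to the front of the unsorted part
        lst[i], lst[min_index] = lst[min_index], lst[i]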
The second sorting algorithm is quick sort. The basic idea is to quick sort (recursion here!) the elements that are less than L[0], quick sort (recursion) the elements that are greater than L[0], then return the first part + L[0] + the second part. The worst case time complexity is O(n^2), which happens when L[0] is always the smallest (or greatest) element in every recursion. The best case time complexity is O(n*logn), because we split the elements into halves about logn times, and every level of splitting needs about n operations to put them together.
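A minimal version along these lines, using L[0] as the pivot:

def quick_sort(L: list) -> list:
    """Return a new sorted list, using L[0] as the pivot."""
    if len(L) < 2:
        return L[:]
    pivot = L[0]
    smaller = [x for x in L[1:] if x < pivot]
    larger = [x for x in L[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)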
The third sorting algorithm is merge sort, which seems more complicated. I found a picture on Wikipedia which I think perfectly explains how merge sort works. The basic idea of merge sort is to merge (we have to write a helper function for the merge) the two halves, each of which has been merge sorted (recursion here!). The time complexity is O(n*logn) even in the worst case.
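Here is a sketch of merge sort with that merge helper (my own version, not the lecture code):

def merge(left: list, right: list) -> list:
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # at least one of the two remaining slices is empty here
    return result + left[i:] + right[j:]

def merge_sort(L: list) -> list:
    """Return a new sorted copy of L."""
    if len(L) < 2:
        return L[:]
    mid = len(L) // 2
    return merge(merge_sort(L[:mid]), merge_sort(L[mid:]))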
The fourth sorting algorithm is counting sort. The time complexity is O(n+k), where k is the greatest element in the list, which seems perfect. The basic idea is to create another list, list2, and use the indices of list2 to indicate the numbers. For example, if the list to sort is [1,2,3,5,1], then we create list2 = [0,2,1,1,0,1]: list2[1] = 2, indicating that there are two 1's, and list2[5] = 1, indicating that there is one 5. Then we build the sorted list by traversing list2 (list2[0] = 0, do nothing; list2[1] = 2, append two 1's; list2[2] = 1, append one 2; and so on).
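A sketch of this (my own version; it assumes the list is non-empty and holds non-negative integers):

def counting_sort(lst: list) -> list:
    """Return a sorted copy of lst (non-negative integers only)."""
    # counts[n] records how many times n occurs in lst
    counts = [0] * (max(lst) + 1)
    for n in lst:
        counts[n] += 1
    # walk the counts in index order, emitting each value count times
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result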
Currently, I can work out the time complexity of some algorithms intuitively, but I can't prove the time complexity rigorously. Maybe I will learn how to deal with that in 200-level and 300-level csc courses.
In the lab, we learned how to traverse a binary tree and return two linked list nodes (in the past, we had learned how to return a normal Python list). At first, I was very confused about why we have to return two linked list nodes (the first node of the traversal and the last node of the traversal) rather than one, and I had no idea how to construct this recursive function. After searching on Google, I found that the perfect way is to recurse on the left subtree and the right subtree, link the last node of the left subtree's list to the root, and link the root to the first node of the right subtree's list; then all the nodes are linked!!
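A hedged sketch of that idea (the BTNode and LLNode class names here are my own, not the lab's starter code):

class BTNode:
    """A binary tree node."""
    def __init__(self, data, left=None, right=None) -> None:
        self.data, self.left, self.right = data, left, right

class LLNode:
    """A singly linked list node."""
    def __init__(self, data, link=None) -> None:
        self.data, self.link = data, link

def inorder_to_list(node: 'BTNode') -> tuple:
    """Return the (first, last) linked list nodes of an inorder traversal."""
    if node is None:
        return None, None
    left_first, left_last = inorder_to_list(node.left)
    right_first, right_last = inorder_to_list(node.right)
    root = LLNode(node.data)
    if left_last is not None:     # hook the left list onto the root
        left_last.link = root
    root.link = right_first       # hook the root onto the right list
    return (left_first if left_first else root,
            right_last if right_last else root)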
Monday, November 4, 2013
csc148_slog_binary_search_tree
In week 8, we learned about binary search trees (BST) and the implementation of some BST methods such as insert and delete (in lecture), count_less (in lab), and __contains__ and max_node (in exercise).
The basic idea for implementing insert and delete is to use recursion. The delete algorithm is comparably more difficult, because when we delete a node (when data == node.data), we have to replace it with another node that doesn't break the BST property. Here we want to find the maximum node of the left subtree (or the minimum node of the right subtree) and use it to replace the deleted node.
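A rough sketch of the delete idea (my own simplified version, assuming nodes with data, left, and right attributes; the lecture code differed in the details):

def delete(node: 'BSTNode', data) -> 'BSTNode':
    """Delete data from the BST rooted at node and return the new root."""
    if node is None:
        return None
    if data < node.data:
        node.left = delete(node.left, data)
    elif data > node.data:
        node.right = delete(node.right, data)
    else:
        # node with at most one child: promote the other child
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # two children: replace with the maximum node of the left subtree
        max_node = node.left
        while max_node.right is not None:
            max_node = max_node.right
        node.data = max_node.data
        node.left = delete(node.left, max_node.data)
    return node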
Another challenging topic this week is the conversion from a recursive algorithm to an iterative algorithm. Actually, we can find a corresponding iterative algorithm for every recursive algorithm if we have pondered the essence of recursion. Here I take the Fibonacci sequence as an example: the basic idea of the recursive algorithm is shown in the picture below (fib(n) = fib(n - 1) + fib(n - 2), with base cases fib(0) = 0 and fib(1) = 1).
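For reference, here is the recursive version being converted (the standard textbook definition, not necessarily the exact lecture code):

def fib(n: int) -> int:
    """Return the n-th Fibonacci number recursively."""
    if n <= 1:
        return n            # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)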
Here we can code it iteratively but using the idea of recursion. (We can see that recursion is just like popping an element from a stack, doing some calculation, pushing the pending recursive calls back onto the stack, and repeating until the stack is empty.)
def function(x):
    """Return the x-th Fibonacci number iteratively, using an explicit stack."""
    c = 0
    stack = [x]
    while stack:
        p = stack.pop()
        if p > 1:
            # stand-in for the recursive calls fib(p - 1) and fib(p - 2)
            stack.append(p - 1)
            stack.append(p - 2)
        elif p == 1:
            # each base case fib(1) = 1 contributes 1 to the total
            c = c + 1
    return c