Dre’s blog

  • Take aways
    Source: Dre's blog Published on 2022-05-13
  • 5/2/22 blogpost
    Source: Dre's blog Published on 2022-05-02
  • 4/20 Blog
    Source: Dre's blog Published on 2022-04-22
  • class 2/9/22
    Source: Dre's blog Published on 2022-02-10
  • R1
    Source: Dre's blog Published on 2022-02-01
  • Hello world!
    Source: Dre's blog Published on 2020-02-10

4/25 Class Recap Blog – the digital age

Happy Monday everyone!

In today’s lecture, we mainly talked about the data path and memory. We started with the general process of a full adder circuit: inputs A and B (plus a carry-in) produce a sum bit and sometimes a carry-out (Cout).
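The full-adder step can be sketched in a few lines of Python (my own sketch of the standard circuit logic; in class we looked at it as a circuit, not code):

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out).

    The sum bit is 1 when an odd number of inputs are 1 (XOR);
    the carry-out is 1 when at least two of the inputs are 1.
    """
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# 1 + 1 with no carry-in: sum bit is 0, and a carry-out of 1 appears
print(full_adder(1, 1, 0))  # (0, 1)
```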

We then moved on to the arithmetic logic unit (ALU), discussed at a highly conceptual level.

The wires that connect the components are buses!  They transfer data from one component to another.

Data can be stored in different places, and the computer accesses each place at a different speed. The memory hierarchy, from fastest to slowest, is: registers (your brain) -> cache (paper on your desk) -> RAM (a book on your shelf) -> hard drive (interlibrary loan).

We then mainly discussed RAM. RAM (Random Access Memory) skips linear search and gets data directly by address (addressable memory). Example: we don’t have to go through the whole list of songs (linear search); instead, we can click and play the song we want directly (addressable memory).
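The song analogy can be sketched in Python (hypothetical song names, my own illustration): linear search scans every entry, while addressable memory jumps straight to a known index.

```python
songs = ["Intro", "Groove", "Anthem", "Outro"]

# Linear search: check each song in order until we find the one we want.
def find_linear(title):
    for i, song in enumerate(songs):
        if song == title:
            return i
    return -1

# Addressable memory: if we already know the address (index),
# we grab the song directly -- no scanning required.
def play_at(address):
    return songs[address]

assert find_linear("Anthem") == 2   # walked past 2 other songs first
assert play_at(2) == "Anthem"       # one direct lookup
```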

After going through all the concepts, we turned to the practical part to better understand a computer’s mechanical operations, using a simulation of CPU architecture to learn how instructions are processed along the CPU data path cycle and how values from memory are incorporated into CPU instructions.

http://www.dave-reed.com/book/Chapter14/datapath/datapath.html

http://www.dave-reed.com/book/Chapter14/dpandmem.html

This activity reminded me of the basic circuit we started with, just in a more complex form.

We covered a lot of new technical concepts in this class.  Feel free to add any information I missed, or anything else you would like to add to this recap, in the comments!

Apr 22nd Recap – Muqi Guo

Today we mainly focused on the complexities of search and sort algorithms. As computer scientists, we divide such analysis into two categories: time complexity and space complexity. We focused mainly on time complexity because it is easier to measure; space complexity describes the amount of memory a given algorithm uses. Usually, the less space an algorithm uses, the better. We use run time to describe time complexity.

For example, the runtime of binary search is O(log N), versus O(N) for linear search.

Binary search divides the array in half on each loop, so an array of size N shrinks to a subproblem of size 1 after about log2(N) halvings.
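A minimal binary search in Python makes the halving visible (a sketch, not the class's exact code):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Each iteration halves the remaining search range, so a sorted
    array of size N needs at most about log2(N) comparisons.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
assert binary_search([1, 3, 5, 7, 9, 11], 4) == -1
```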

Sort

Professor Rodrigues walked through the process of insertion sort on the whiteboard.

At a high level, insertion sort works by iterating through each element in the list and swapping it leftward past larger values as needed, so that all values to the left of the boundary stay sorted.

However, insertion sort is not the most efficient sorting algorithm, as its worst-case runtime is O(N^2), which is not ideal.

Merge sort, on the other hand, is more efficient, with O(N log N) runtime. It achieves this through the divide-and-conquer technique, which splits a big problem into smaller subproblems and then merges the solutions back together to get the result, similar in spirit to binary search.

We also talked about Bogosort, which is super inefficient, but it was interesting to learn about.

Big O

We also talked about Big O notation, which is used to describe the runtime of an algorithm. The symbol “O” denotes an upper bound on a function’s growth. Since we care about the worst-case runtime of an algorithm, we usually use Big O notation to describe how efficient an algorithm is.

For example: O(1) (constant), O(log N), O(N log N), O(N^2), O(2^N), O(N!).
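A quick numeric check shows how differently these classes grow, even at a small input size like N = 10 (my own illustration, not from class):

```python
import math

N = 10
growth = {
    "O(1)":       1,
    "O(log N)":   math.log2(N),
    "O(N)":       N,
    "O(N log N)": N * math.log2(N),
    "O(N^2)":     N ** 2,
    "O(2^N)":     2 ** N,
    "O(N!)":      math.factorial(N),
}

# Print each class's value at N = 10; O(N!) already dwarfs the rest.
for name, value in growth.items():
    print(f"{name:>10}: {value:,.0f}")
```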

We then played around with the CS city guide trip problem, whose runtime grows very quickly as we increase the size of the input.

The halting problem was proposed by Alan Turing.

Basically, we have an algorithm described as below:

    if loop forever:
        return 0
    else:
        return 1

But the caveat is that we define an “evil genius” program that does the opposite of whatever this checker predicts: T = never terminates, F = terminates.

This creates a paradox: if the evil genius loops forever, the checker should return True; but that cannot be the case, since we defined the evil genius to terminate exactly when the checker says it loops forever.

Thus the halting problem is unsolvable.

Muqi Guo

4/20 Blog

We started off the class looking at visualizations of sorting algorithms, to help us understand what is actually moving in them. Insertion sort seemed to be one of the faster methods to watch, while merge sort appears a lot more complex.

The lab today was to create a form in Python where people had to input “valid” information, which helps filter out useless information and makes things like replying to users much easier. There’s more than one way to solve a validation problem. We also joined Replit, a code-sharing platform that makes it easier to share code among ourselves. Mrs. Rodrigues sent us all an email to join if you haven’t already!

We learned a few new lines of code to help sort valid information, such as:

.strip() == “”: catches input that is empty or only whitespace (multiple spaces, tabs, enter (new lines), etc.), so we can return an error.

.isdigit(): returns True if all the characters are digits, otherwise False; checking for False lets us reject non-numeric input like a, ten, !, or -10 (the minus sign is not a digit).
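Putting the two checks together, a validation helper might look like this (a sketch with a hypothetical function name, not the lab’s exact code):

```python
def parse_positive_int(raw):
    """Return the input as an int if it is a positive whole number,
    otherwise None (meaning: reject and re-prompt the user)."""
    text = raw.strip()
    if text == "":            # empty, or only spaces/tabs/newlines
        return None
    if not text.isdigit():    # rejects "a", "ten", "!", "-10"
        return None
    value = int(text)
    return value if value > 0 else None   # also rejects 0

assert parse_positive_int("42") == 42
assert parse_positive_int("   ") is None   # whitespace-only input
assert parse_positive_int("-10") is None   # minus sign is not a digit
assert parse_positive_int("0") is None     # not positive
```

In a real form you would wrap this in a loop that keeps prompting with `input()` until the helper returns a value.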


4/18 Recap

In today’s class, we first discussed the effects of algorithms used by search engines. Professor Rodrigues introduced us to the mechanics, benefits, and drawbacks of the first algorithm (PageRank) used by Google. From this real-life example, it seems that although algorithms are powerful tools, human intervention is needed to improve and revise them so that they won’t cause discrimination or other ethical issues.

We then moved to the concept of sorting methods. The first sorting method we learned is called insertion sort. It iterates from the first element to the last element in the array. In each iteration, the algorithm compares the current element to its predecessor; if it is smaller, the two elements are swapped. This process repeats until the current element reaches the left end of the array or is no longer smaller than its predecessor. We use a nested loop to implement insertion sort, which means it has a time complexity of O(N^2). Insertion sort works best when the array is small or most of its elements are already sorted.

This example shows how insertion sort works:
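In code form, the steps described above can be sketched like this (my own Python sketch, standing in for the class example):

```python
def insertion_sort(arr):
    """Sort arr in place.  Everything left of index i stays sorted;
    each new element is swapped leftward past larger predecessors."""
    for i in range(1, len(arr)):
        j = i
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]   # swap with predecessor
            j -= 1
    return arr

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```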

Next, we also learned another sorting method called merge sort. The idea is to repeatedly split an array into subarrays until every subarray has length 1, then merge these subarrays back together into a new array. To build up the merged array, the current smallest element remaining at the front of the two subarrays is plugged in first; this repeats until all the elements in the two subarrays have been pushed into the new array. Merge sort has a time complexity of O(N log N) and is better suited when the array is large and most of its elements are not sorted.

The example below shows how merge sort works:
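As a sketch of the split-and-merge process described above (my own Python illustration):

```python
def merge_sort(arr):
    """Return a sorted copy of arr.

    Split until subarrays have length 1, then merge pairs back
    together, always taking the smaller front element first."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # smaller front element goes first
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # one side may have leftovers
    merged.extend(right[j:])
    return merged

assert merge_sort([38, 27, 43, 3, 9, 82, 10]) == [3, 9, 10, 27, 38, 43, 82]
```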