*** APA FORMAT (include references and in-text citations)
*** 5 PAGES
Select one paper that meets all of the following criteria:
1. It comes from an academic journal
2. It includes data (graphs, tables, numbers).
3. It has something to do with Operating Systems.
The paper selected may be one from the reading list, but it does not have to be.
Writing your Critique
Your critique will probably be on the order of three to five pages although there will be exceptions. In your critique you must at least answer the following questions (depending on the paper, there will be other things to discuss as well).
· What is the purpose of the paper?
· What is the hypothesis that the authors are testing?
· What is the experimental/interview setup?
· What is good/bad about the setup?
· How well was the research carried out? What results are presented?
· Do you believe the results? Why/Why not?
· What things might you have done differently?
· What lessons did you learn from reading this paper critically?
Suggested Papers
Here are some suggested papers. If you choose something not on this list, check with me before beginning the assignment to make sure that the task you are undertaking is reasonable.
1. Agrawal 2009, Generating Realistic Impressions for File-System Benchmarking. Take measurements of a real system, generate an Impressions file system, and then compare (as is done in Figure 2).
2. Baker 1991, Measurements of a Distributed File System. Process the traces to reproduce some/all of the graphs or write a simulator to reproduce some of the results from the second half of the paper.
3. Blackwell 1995, Heuristic Cleaning Algorithms in Log-Structured File Systems (appeared in the 1995 Usenix Technical Conference). Use the trace-data that is available and reproduce some of the graphs.
4. Blake 2003, High Availability, Scalable Storage, Dynamic Peer Networks: Pick Two (appeared in the 2003 Hot Topics in Operating Systems). Reproduce the graph in Section 4.1.
5. Brown 1995, Benchmarking in the Wake of Lmbench: A Case Study of the Performance of NetBSD on the Intel x86 Architecture. Find a range of Intel processors and try to reproduce some of the hierarchical decompositions.
6. Cadar 2008, Klee: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. Download their tool (it’s not on a Stanford site, it’s at llvm.org) and try it on some of the workloads they used.
7. Chen 1995, The Measured Performance of Personal Computer Operating Systems. Try rerunning some of the microbenchmarks on modern machines and modern versions of the operating systems in the paper.
8. Dahlin 1994, A Quantitative Analysis of Cache Policies for Scalable Network File Systems (appeared in the 1994 Sigmetrics). Use the Sprite traces and reproduce some of their graphs.
9. Ellard 2003, Passive NFS tracing of Email Workloads (appeared in the 2003 USENIX FAST conference). Process the traces with your own tools and produce some of the graphs.
10. Harnik 2013 To Zip or not to Zip: Effective Resource Usage for Real-Time Compression. See if you can get the same kinds of compression timings that the authors got.
11. Holland 2013 Flash Caching on the Storage Client. Build a simple simulator that tackles a small subset of the design space and see if you can get results similar to those of the authors.
12. Howard 1988, Scale and Performance in a Distributed File System (on the reading list). Recreate the Andrew benchmark results.
13. Koller 2013: Write Policies for Host-side Flash Caches. Start with the analytical results from Figure 1. Then see if you can put together a system that looks something like what the authors did and see if you can run any of their benchmarks.
14. Kyrola 2012 GraphChi: Large-Scale Graph Computation on Just a PC. Most of the graphs from this paper are available from the SNAP repository and many of the systems against which to compare are open source.
15. Mao 2012, Cache Craftiness for Fast Multicore Key-Value Storage. The software described in the paper is publicly available. See if you can reproduce any of Figures 9–11.
16. McKusick 1984, A Fast File System for UNIX. Regenerate (and explain how) the numbers in Table 1; generate results similar to those in tables IIa and IIb for different block/fragment size combinations on an FFS.
17. McSherry 2013 Scalability! But at what COST? These are all graph processing problems on a PC; how hard can that be?
18. McVoy 1996, lmbench: Portable Tools for Performance Analysis (appeared in the 1996 USENIX technical conference). Run lmbench on some platforms similar/identical to the ones described.
19. Megiddo 2003, ARC: A Self-Tuning, Low Overhead Replacement Cache (appeared in the 2003 USENIX FAST conference). Implement or simulate ARC and produce results.
20. Muniswamy-Reddy 2006, Provenance-Aware Storage Systems. Get PASS running and run some of the benchmarks.
21. Oppenheimer 2006, Service Placement in a Shared Wide-Area Platform. Obtain their traces and see if you can reproduce some graphs.
22. Qiao 2006, Structured and unstructured overlays under the microscope. Pick some existing systems in deployment today and conduct analyses like those done in the paper.
23. Rosenblum 1992, The Design and Implementation of a Log-Structured File System. Recreate the micro-benchmarks section.
24. Roy 2013 X-Stream: Edge-centric Graph Processing using Streaming Partitions. This paper has a lot of different data – not just run time. Trying to reproduce it should be, um, fun.
25. Small 1996, A Comparison of OS Extension Technologies. Select one of the grafts and a few technologies, or one of the technologies and a few of the grafts.
26. Smith 1996, Comparison of FFS Disk Allocation Algorithms. Try using the information available on the Web page.
27. Sumbaly 2012 Serving Large-scale Batch Computed Data with Project Voldemort. Using the publicly available Voldemort and MySQL releases, see if you can reproduce any of the graphs in the evaluation.
28. Volos 2014, Aerie: Flexible File-System Interfaces to Storage-Class Memory. See if you can reproduce Figure 1.
29. Waldspurger 1994, Lottery Scheduling: Flexible, Proportional-Share Resource Management. Try your hand at modifying a scheduler and running some of the MPEG results.
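Several of the suggestions above (e.g., items 8, 11, 13, and 19) ask you to write a small trace-driven cache simulator. As a hedged illustration of what the skeleton of such a simulator might look like, here is a minimal LRU cache replayed against a synthetic trace; the class name, capacity, and trace are invented for the sketch, and a real experiment would instead parse the traces published with the paper and implement the policy under study (e.g., ARC).

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal trace-driven LRU cache simulator (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key = block id; insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def access(self, block):
        if block in self.store:
            self.store.move_to_end(block)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[block] = True

# Replay a synthetic uniform-random trace; substitute the paper's real traces here.
random.seed(0)
trace = [random.randint(0, 99) for _ in range(10_000)]
cache = LRUCache(capacity=32)
for blk in trace:
    cache.access(blk)
print(f"hit ratio: {cache.hits / len(trace):.3f}")
```

Swapping in a different replacement policy then only requires changing the eviction logic in `access`, which keeps the trace-replay harness and hit-ratio accounting reusable across experiments.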