
 Assignment 1 (Due: before July 14, 2009, 13:00hrs)



PostSubject: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Wed Jul 01, 2009 4:49 pm

Read three published scientific papers (of varying quality) and write a short report for each of them. Successful completion of this assignment is compulsory before doing the rest of the assignments. GOD BLESS.

Some TIPS on writing a short report/review/reflection:

Provide a concise summary of the paper.
Evaluate the paper by way of assessment, identifying positive and negative sides, unclear points, etc. (regarding substance, presentation, format, figures, and so on)
Keep to the point at issue, and have a respectful and constructive attitude
The review must be performed in a friendly atmosphere and with a humble mind
Kate Mariel Dizon


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Tue Jul 14, 2009 2:55 pm

I hope this is friendly enough and constructive enough. And I hope, more than anything, that this is correct... haha.
Cultural Gaze Behavior to Improve the Appearance of Virtual Agents
Nikolaus Bee and Elizabeth Andre
University of Augsburg
Institute of Computer Science
Augsburg, Germany


A virtual agent is usually a program that serves as an online customer service representative for a company. It has the appearance of a human and responds to customer questions, which makes it useful in customer relationship management.

The paper studies the eye-gaze behavior of different cultures in order to improve interaction with virtual agents. The authors believe that eye gaze is important when communicating with a virtual agent. The paper aims to measure users’ eye gaze and build a gaze awareness model that could be applied during human-machine (user-virtual agent) interaction.

To improve the interaction between humans and machines, Andre and Bee conducted an experiment in which they grouped the functions of eye gaze into five: providing information, regulating interaction, expressing intimacy, exercising social control, and facilitating service or task goals. They also studied the eye gaze of different cultures so that they could mediate the eye gaze of virtual agents to suit humans from different cultures during interaction. Using eye trackers, they plan to monitor users’ eye gaze during human-machine interaction in close to real time.

Bee and Andre reached the conclusion that it is difficult to provide standardized results on eye gaze behavior. This is due to the absence of a norm for studying the eye gaze of different cultures, and because the studies conducted before had different goals. However, they were able to produce a set of findings that can help implement models that are aware of strong cultural distinctions.


I would like to begin my evaluation with the Abstract of the paper. It was only one paragraph but it was enough to catch my attention. It was the first thing I read that made me want to read the rest. The abstract was brief and interesting. They stated their goals for the paper and their reasons for doing the study. However, I don’t think they included a summary of their methodology in the abstract.

The rest of the paper was more of a compilation of related literature about eye gaze and human-machine interaction. I am not sure if they performed anything technical in this study, but based on what I’ve read, the study is mostly observational, and thus social. I think, if this is continued into further research, it can be considered an example of what RSG said about a combination of social and technical research. This first paper is probably more observational, building up the literature. The authors mentioned somewhere in the paper that they “plan” to monitor users’ eye gaze using eye trackers; this is probably the next technical step for the study.

Regarding the format, the paper did not follow the standard format that we learned in class. However, it did have an Abstract and a Conclusion, and a large part of it was related literature. I don’t see anything wrong with this because the paper was organized and well presented, so it doesn’t matter that it didn’t follow the standard.

Knowledge Production in Nanomaterials:
An Application of Spatial Filtering
to Regional Systems of Innovation

Christoph Grimpe and Roberto Patuelli


This paper was quite difficult to summarize, but I’ll try my best. Nanotechnology has been identified as one of the key technologies of the 21st century and is believed to contribute substantially to innovativeness, economic growth and employment. Thus, it is important for regions to put their own innovations in place to be able to benefit from the growth expected from nanomaterial applications.

Grimpe and Patuelli investigated what conditions and configurations allow a regional innovation system to be competitive in a cutting-edge technology such as nanotechnology. To determine the role of localized research and development in the public and private sector, they analyzed nanomaterial patent applications filed at the European Patent Office, broken down by German district. They also used a spatial filtering approach and the knowledge production function framework to analyze the results of their study.

Based on the results, they found a favorable effect of specialization in chemistry and electronics, two fields that are closely related to nanotechnology; thus, regions should try to attract firms interested in these two areas. “Nanoparks” are also more concentrated in urbanized regions than in medium-sized/rural regions. Finally, they found that nanomaterial production seems to depend more on human capital and specialization than on size.


I will begin by saying that I think this is a very complicated but good piece of research. Complicated in the sense that I could barely understand the terms and the methodologies used; I had to do further research about those. It is also good in the sense that the topic, nanotechnology, is current. I do agree that nanotechnology is still an emerging field, but I believe that it will be of great importance in the near future and that regions should really try to invest in this technology.

When it comes to the methods used in the study, there were complicated formulas involved. However, I’d have to say that the methodology was well written and very detailed. In the data analysis part, the explanations were a bit confusing because some of the sentences are so long that I tended to lose the thought in the middle of reading them. But the good thing here is that they provided illustrations of the spatial distribution of nanomaterial patent applications. The figures helped me understand what the text was trying to say.

As for the formatting, the paper followed the basic format: introduction, methodology, data analysis, and conclusion. The statistical tables, I also noticed, followed the APA format and I must say that they looked organized even with the numerous data they presented. Finally, citations were found all over the paper. Whenever the authors used ideas from other authors, they always ended the statement/s with the citations.

The only negative thing I found about the paper is the complexity of the terms used. But hey, it may just be me.

Thermoelectric and Solar Energy
Chris Gould and Noel Shammas
Staffordshire University, Faculty of Computing, Engineering and Technology
Beacon Building, Beaconside, Stafford, ST18 0AD, United Kingdom


This paper was in some way inspired by issues surrounding climate change. Solar energy, along with wind, biomass, wave and geothermal energy, is considered a renewable form of energy. Thermoelectric energy can be used in power generation, cooling and refrigeration.

From the 1950s until today, bismuth telluride has been used as a material in energy production because of its thermoelectric effects at lower temperatures. Many other ways to improve the competitiveness of thermoelectric materials have been developed. These include increasing the electrical power factor, decreasing cost, developing environmentally friendly materials, and the use of functionally graded materials.

One issue in the use of thermoelectric materials is their availability. There are four main raw materials that can be used to generate thermoelectric energy, but they are rare and may be toxic. There is an alternative, however, that is highly abundant but has a lower thermoelectric figure of merit (ZT).
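For reference, the figure of merit mentioned here is conventionally defined as ZT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. A minimal sketch, using illustrative bismuth-telluride-like values that are not taken from the paper:

```python
def figure_of_merit(seebeck, elec_conductivity, thermal_conductivity, temperature):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck ** 2 * elec_conductivity * temperature / thermal_conductivity

# Rough room-temperature values for a bismuth-telluride-like material
# (illustrative only): S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m*K).
print(figure_of_merit(2.0e-4, 1.0e5, 1.5, 300))  # ~0.8
```

A ZT near 1 is roughly what makes a material competitive, which is why the paper's point about abundant-but-low-ZT alternatives matters.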

Focusing now on solar energy, the materials for production are usually classified into first generation, second generation and third generation technologies. Third generation technologies aid in the development of future photovoltaic (solar) materials.

The paper recommended further research into finding alternative thermoelectric materials that are more abundant and cheaper than existing materials. It also recommended turning the ZT values achieved with thermoelectric nanotechnology into practical devices. Furthermore, future research is encouraged to look into nanotechnology as a tool for developing thermoelectric materials.


The paper was more of a historical type of research than a technical one, because the authors did not perform any experimentation and relied mainly on their literature. Despite the paper lacking a methodology, I think it is still acceptable because of the sufficient literature they provided. I also found the topic interesting because of its currency and its relation to our CS Research Methods theme.

As for the format of the paper, there were only two major parts: related literature and conclusion. There were no other parts of a “standard” paper, but I think this has something to do with it being a historical paper. The conclusion and recommendation were drawn from the literature review they conducted, and I think the ideas were properly organized. The paper cited both the advantages and disadvantages of using thermoelectric and solar energy, and I think that is the result of a critical analysis of their literature. For me, this paper clearly demonstrates how important it is to gather, read and critically analyze the related literature of a study.


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Thu Jul 16, 2009 7:49 pm

I hope this makes sense...

Scientific Research 1

Virus Bulletin 2010: A Retrospective
Steve R. White
IBM Thomas J. Watson Research Center


The paper assesses the most important virus disasters of the past ten years to show how they could have been foreseen from technology trends and avoided. White gives an overview of how antivirus companies take action. Some cited virus problems are Internet-based spread, administrative overhead, rapid epidemics, complex viruses, and small devices, but there were already emerging solutions to address them.
It is said that four technology trends were responsible for significant changes in the computing environment, which set the scene for the virus problem. These are (1) pervasive computing devices, (2) Moore’s Law and the resulting fall in chip prices, (3) broadband access to the Internet, and (4) the rise of e-commerce.

The paper attempts to take a humorous look at what might happen in the next ten years of the anti-virus field by looking back at the last ten years of the antivirus industry, from the new millennium up to 2010. The author imagines a simple view of the future in which the solutions the antivirus industry has been working on result in a year without major virus incidents and no overblown virus hoaxes.


Upon reading the entire paper, I was impressed by the way the author presented it. Although it is more of a simple review of the history of viruses over the past years, it is clearly written, which makes it easier for the reader to read and understand. I also admire the author for having a positive vision of what might happen in the year 2010.

As regards the way it is presented, just by reading the abstract you can easily comprehend what the paper is all about. But I don’t think introducing your own name belongs in the introduction, since I haven’t seen that done before. The language used in the paper is easy to understand, so even a reader who is not quite familiar with the concepts can follow what the writer is trying to explain.


Scientific Research 2

Can Cryptography Prevent Computer Viruses?
John F. Morar, David M. Chess
IBM Thomas J. Watson Research Center
Hawthorne, NY, USA


Cryptography is one essential aspect of securing communications. There are many aspects to security, ranging from secure commerce to protecting passwords, but can cryptography also prevent computer viruses? The paper provides an overview of the ways encryption technology bears on virus protection and related security efforts, and provides some understanding of how encryption can help.

The paper also covers how viruses use encryption to spread with more protection, making themselves more difficult to detect and analyze, and the ways encryption can cut both ways, since in some cases it makes virus protection more difficult. Cryptography can be a barrier to effective virus prevention: there are a number of situations in which encryption of potentially infected data prevents that data from being examined for the presence of viruses. In particular, whenever encryption has been used to restrict the ability to read a dataset to some set of entities, and the entity attempting to check the dataset for viruses is not in that set, the encryption will prevent the virus check.
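The point about encryption blocking virus checks can be shown with a toy sketch (hypothetical signature and scanner, not from the paper): a byte pattern that a signature scanner finds in plaintext becomes invisible once the data is encrypted and the scanner lacks the key.

```python
# Toy illustration: a byte signature visible in plaintext disappears after
# encryption, so a scanner without the key cannot check the data for it.
SIGNATURE = b"EVIL_PAYLOAD"  # hypothetical virus signature

def xor_encrypt(data: bytes, key: int) -> bytes:
    """Single-byte XOR 'encryption' -- already enough to defeat a signature match."""
    return bytes(b ^ key for b in data)

def scan(data: bytes) -> bool:
    """Signature scanner: flags data containing the known byte pattern."""
    return SIGNATURE in data

infected = b"...program bytes..." + SIGNATURE + b"...more bytes..."
print(scan(infected))                     # True: detected in the clear
print(scan(xor_encrypt(infected, 0x5A)))  # False: hidden by encryption
```

The scanner would have to be inside the set of entities that can decrypt the data, which is exactly the situation the paper describes.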

The paper also briefly describes the use of cryptography in viruses, in antivirus software, and in general security systems. The authors believe that cryptography will play an important role in the way our systems are secured in the future, both against viruses and against the more general class of emerging threats.


Cryptography is a particularly interesting field because of the amount of work that is done in secret. I now understand why there is a continuous flow of experts from other fields who offer cryptographic variations of their ideas. There are in fact many extremely intelligent and well educated people with wide-ranging scientific interests who are active in this field. The title alone can really awaken the interest of many people. The paper clearly states that even if cryptography is needed for secure communications, it is not by itself sufficient.

With regard to the format, it is not that awe-inspiring to look at. Perhaps that’s because it is not written in a professional and presentable format; instead, the text is written in a simple manner. On the other hand, the main finding of the study is clearly stated, and I found it impressive and informative, enough for me to understand what was bugging my mind.


Scientific Research 3

Survivability: Protecting Your Critical Systems
Robert J. Ellison, David A. Fisher, Richard C. Linger, Howard F. Lipson
Thomas A. Longstaff, Nancy R. Mead
CERT® Coordination Center
Software Engineering Institute, Carnegie Mellon University
Pittsburgh, PA 15213-3890


Nowadays Internet use is growing faster than the price of gasoline, and society’s dependence on it has grown as well. The Internet is one good example of a highly distributed system that operates in unbounded network environments, and it is known to have no integrated security policy. The paper describes the survivability approach to ensuring that systems operating in an unbounded network remain dynamic in the presence of attack and survive attacks that result in successful intrusions. The paper also includes discussions of survivability as an integrated engineering framework, the current state of survivability practice, the specification of survivability requirements, the strategies for achieving survivability, and some survivability solutions.

The paper also explains that the capability of a survivable system to fulfill its mission in a timely manner is linked to its ability to deliver essential services in the presence of an attack, accident or failure. It states that a system is prone to attacks because the Internet itself has no central administrative control, which allows unauthorized persons to access a system.


It is clearly stated in the paper that even hardened systems can and will be broken. Thus survivability solutions should be incorporated into both new and existing systems to help them avoid the potentially devastating effects of compromise and failure due to attack. I find this very helpful, especially for organizations that want to fully understand how to protect a system.

The paper explains everything well. It is presented clearly enough for the reader to comprehend, and it doesn’t strain your eyes because the text is well laid out.


hannah rhea hernandez


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Fri Jul 17, 2009 4:45 pm

Kevin Crowston and James Howison
School of Information Studies, Syracuse University


What do we really know about the communication patterns of FLOSS projects? How generalizable are the projects that have been studied? Is there consistency across FLOSS projects?

These are the questions this paper responds to. Questioning the assumption of distinctiveness is important because practitioner-advocates from within the FLOSS community rely on features of social structure to describe and account for some of the advantages of FLOSS production.

To address these questions, this study examined 120 project teams from SourceForge, representing a wide range of FLOSS project types, for their communications centralization as revealed in the interactions in the bug tracking system. It was found that FLOSS development teams vary widely in their communications centralization, from projects completely centered on one developer to projects that are highly decentralized and exhibit a distributed pattern of conversation between developers and active users.

The authors therefore suggest that it is wrong to assume that FLOSS projects are distinguished by a particular social structure merely because they are FLOSS. Their findings suggest that FLOSS projects might have to work hard to achieve the expected development advantages which have been assumed to flow from “going open.” In addition, the variation in communications structure across projects means that communications centralization is useful for comparisons between FLOSS teams. They also found that larger FLOSS teams tend to have more decentralized communication patterns, a finding that suggests interesting avenues for further research examining, for example, the relationship between communications structure and code modularity.
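To give a feel for what “communications centralization” measures, here is a sketch with made-up data (not the authors’ code), using Freeman-style degree centralization: a star network where every bug-tracker conversation runs through one developer scores 1.0, while perfectly even participation scores 0.0.

```python
def degree_centralization(degrees):
    """Freeman degree centralization from a list of per-person interaction counts."""
    n = len(degrees)
    if n < 3:
        return 0.0
    max_deg = max(degrees)
    # Sum of differences from the most central actor, normalized by the
    # maximum possible sum (attained by a perfect star on n nodes).
    return sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2))

star = [4, 1, 1, 1, 1]  # one developer answers every bug report
even = [2, 2, 2, 2, 2]  # conversation spread evenly across the team
print(degree_centralization(star))  # 1.0
print(degree_centralization(even))  # 0.0
```

The paper’s finding is that real FLOSS projects land all over this 0-to-1 range rather than clustering at one end.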

The topic being raised is really interesting (since I can relate to it better than the other research papers I found). The conclusion really did prove the points raised, but the downside for me in this study is its methodology. Though the results are backed by evidence and supporting data, the paper is mostly done by researching topics directly and indirectly related to it. The methodology only consists of gathering data and queueing it up, and no other strategies were used (as I see it). As Kate put it, "The paper was more of a historical type of research rather than a technical research because they did not perform any experimentation and relied majorly on their literature."

With regard to the format, there were only two major parts: related literature and conclusion. The other parts of a “standard” paper were indeed missing, and thus it can be understood that the conclusion and recommendation were drawn from the literature review.

However, I think this research is still effective since it provided sufficient data/literature to support the study.

by: Mahmoud Refaat Nasr


This thesis explores the reasons behind the poor level of adoption of open source web GIS software, and whether it is due to poor awareness of open source concepts or to technical deficiencies in the open source tools. The research was done in two major phases; the first phase involved conducting surveys to measure awareness of and attitudes towards open source. The surveys examined three categories of people involved in the IT industry, namely: decision makers, software developers, and end users. Awareness was measured by developing an Awareness Indicator and a Sentiment Indicator for each category. These indicators were developed by the author during the course of the study in order to provide a measurable and descriptive indication of the results. The second phase involved performing a comparative analysis between MapServer, a leading open source web GIS tool, and three of the leading proprietary web GIS products, namely: ESRI’s ArcIMS, Intergraph’s GeoMedia WebMap, and MapInfo’s MapXtreme. The results of the research provide insight into how different categories of people view open source, and demonstrate that lack of awareness of open source concepts and its competencies may be a major reason behind the poor adoption of open source solutions. The results of the comparative analysis also demonstrate that MapServer is technically equivalent to its commercial counterparts.


All I can say is that this is a very good research paper. Maybe it's because I'm still inexperienced in judging research work, but I can't find any faults, or at least major ones, in this paper. Upon reading it, you can see how detailed the framework is, how in depth the research has gone, and that the structure of the paper is complete and nicely done. The research into histories and related topics was complete with regard to the points the researcher raised in his objectives, and the survey done to test the hypothesis of the study was also shown in detail in the paper.

Though the survey done was fine, as a suggestion it would be better if he had added observation and interviews of organizations that were already using open source GIS standards to support his survey findings. Format-wise it's done thoroughly. Overall, it's a pretty detailed and good research paper.

Kevin Crowston, Kangning Wei, Qing Li,
U. Yeliz Eseryel, and James Howison


The apparent success of free/libre open source software (FLOSS) development projects such as Linux, Apache, and many others has raised the question: what lessons from FLOSS development can be transferred to mainstream software development? In this paper, the authors used coordination theory to analyze coordination mechanisms in FLOSS development and compared their analysis with the existing literature on coordination in proprietary software development. The researchers examined developer interaction data from three active and successful FLOSS projects and used content analysis to identify the coordination mechanisms used by the participants. They found that there were similarities between the FLOSS groups and the reported practices of the proprietary project in the coordination mechanisms used to manage task-task dependencies. However, there were clear differences in the coordination mechanisms used to manage task-actor dependencies. While published descriptions of proprietary software development involved an elaborate system to locate the developer who owned the relevant piece of code, the researchers found that “self-assignment” was the most common mechanism across the three FLOSS projects.

This coordination mechanism is consistent with expectations for distributed and largely volunteer teams. They conclude by discussing whether these emergent practices can be usefully transferred to mainstream practice and by indicating directions for future research.


Honestly, reading these research papers was easy, but understanding them is hard, so most of the time I just skimmed through whole sections and judged them by their format and methodology. As I have observed, the drawbacks of this paper are just like those of the first paper I commented on. Though the results are backed by evidence and supporting data, the paper is mostly done by researching topics directly and indirectly related to it. The methodology only consists of gathering data and queueing it up, and no other strategies were used (as I see it). As Kate put it, "The paper was more of a historical type of research rather than a technical research because they did not perform any experimentation and relied majorly on their literature."

With regard to the format, there were only two major parts: related literature and conclusion. The other parts of a “standard” paper were indeed missing, and thus it can be understood that the conclusion and recommendation were drawn from the literature review.

As a suggestion, using other data collection techniques to combine with the data or literature collected would be better.

Karren D. Adarna


PostSubject: assignment_2   Tue Jul 21, 2009 4:58 pm

Managing Power Consumption and Performance of Computing Systems Using Reinforcement Learning
G. Tesauro, R. Das
H. Chan, J. O. Kephart,
C. Lefurgy, D. W. Levine and F. Rawson
IBM Research


This paper addressed a reinforcement learning approach to the simultaneous online management of both performance and power consumption. Electrical power management in large-scale IT systems such as commercial datacenters is an application area of rapidly growing interest, from both an economic and an ecological perspective, because many companies and business organizations want to save power without sacrificing performance. Energy consumption is a major and growing concern throughout the IT industry, as well as for customers and for government regulators concerned with energy and environmental matters. The research tackles intelligent power control of processors, memory chips and whole systems, using technologies such as processor throttling and frequency and voltage manipulation. It presents a reinforcement learning approach to developing effective control policies for real-time management of power consumption in application servers. Such power management policies must make intelligent tradeoffs between power and performance, as running servers in low-power modes inevitably degrades application performance. The researchers designed a multi-criteria objective function, or utility function, that takes both power and performance into account, and used this utility function as the reward signal in reinforcement learning. A high-level overview of the experimental testbed is presented. The researchers applied reinforcement learning (RL) in a realistic laboratory testbed using a Blade cluster and a dynamically varying HTTP workload running on a commercial web applications middleware platform. They embedded a CPU frequency controller in the Blade servers’ firmware and trained policies for this controller using a multi-criteria reward signal depending on both application performance and CPU power consumption.
The testbed scenario posed a number of challenges to the successful use of RL, including multiple disparate reward functions, limited decision sampling rates, and pathologies arising when using multiple sensor readings as state variables. The data collection approach integrates several different data sources to provide a single, consistent view. Several dozen performance metrics, such as mean response time, queue length and number of CPU cycles per transaction, are collected by the WXD data server, a daemon running on WXD’s deployment manager. They also run local daemons on each blade to provide CPU utilization per blade and current CPU frequency, taking into account both the true frequency and any effects due to the current level of processor throttling.
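The multi-criteria utility described above can be sketched roughly as follows. This is a hypothetical sketch with made-up names, targets and weights, not the paper's actual function: the reward is high when response time meets its target and power is low, so the learned policy trades them off.

```python
def reward(mean_response_time, power_watts, rt_target=0.5, power_weight=0.001):
    """Hypothetical multi-criteria reward combining performance and power."""
    # Performance term: 1.0 at or under the response-time target,
    # falling off as the server gets slower than the target.
    perf = min(1.0, rt_target / max(mean_response_time, 1e-9))
    # Power term: penalize consumption, so low-power modes win whenever
    # performance is already acceptable.
    return perf - power_weight * power_watts

# A fast, low-power state beats a fast, high-power one, and missing the
# response-time target is penalized even at low power.
print(reward(0.4, 100.0))  # 0.9
print(reward(2.0, 100.0))  # 0.15
```

Using such a scalar as the RL reward is what lets a single policy learn the power/performance tradeoff instead of optimizing either metric alone.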

The paper presented a successful application of batch RL combined with nonlinear function approximation in the new and challenging domain of autonomic management of power and performance in web application servers. The researchers addressed challenges arising both from operating on real hardware and from limitations imposed by interoperating with commercial middleware. By training on data from a simple random-walk initial policy, they achieved high-quality management policies that outperformed the best available hand-crafted policy. Such policies save more than 10% of server power while keeping performance close to a desired target.


As I read through this research paper, I didn’t really notice anything wrong. As I see it, the researchers fully demonstrated everything and were able to present the necessary data, documents and even graphical representations. The format is well arranged from the introduction to the results and findings, and the contents are very specific. They were able to come up with a very well made piece of research; it is evident that they went through a lot of hard research themselves just to come up with their own.

An Undetectable Computer Virus
David M. Chess and Steve R. White
IBM Thomas J. Watson Research Center
Hawthorne, New York, USA


One of the few solid theoretical results in the study of computer viruses is Cohen's 1987 demonstration that there is no algorithm that can perfectly detect all possible viruses. This brief paper adds to the bad news by pointing out that there are computer viruses which no algorithm can detect, even under a somewhat more liberal definition of detection. The researchers also comment on the senses of "detect" used in these results, and note that the immediate impact of these results on computer virus detection in the real world is small. They started by defining a computer virus in terms of a viral set. A program is said to be infected when there is some viral set of which it is a member. A program which is an instance of some virus is said to spread whenever it produces another instance of that virus. The simplest virus is a viral set that contains exactly one program, where that program simply produces itself. Larger sets represent polymorphic viruses, which have a number of different possible forms, all of which eventually produce all the others.

For the purposes of the paper, an algorithm A detects a virus V if and only if, for every program p, A(p) terminates and returns "true" if and only if p is infected with V. This is essentially Cohen's definition. A very similar example demonstrates that there are viruses for which no error-free detection algorithm exists. That is, not only can we not write a program that detects all viruses, known and unknown, with no false positives, but in addition there are some viruses for which, even when we have a sample of the virus in hand and have analyzed it completely, we cannot write a program that detects just that particular virus with no false positives. Every widely deployed virus detection program in use today will claim to find a virus in at least some non-viral objects (a false positive), because the methods used for detection are approximate: based on the presence of a particular binary string in a certain place, on the calculation of the finite-size checksum of a macro, on a certain pattern of changes to a file, and so on. Producers of anti-virus software of course try to minimize the number of actual non-viral programs that are falsely detected. But no one worries about the fact that the algorithms used to detect viruses produce false positives on an enormous number of non-viral objects that have never been, and will never be, present on any actual user's computer. The paper's title, then, is deliberately somewhat provocative: while the viruses presented here are undetectable in the strict formal sense of the term, there is no reason to think it is impossible to write a program that would detect them sufficiently well for all practical purposes.
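The flavor of this impossibility argument can be sketched in a few lines. This is a conceptual toy, not the paper's formal construction: build a program that consults any claimed detector on itself and then does the opposite of the verdict, so the detector is necessarily wrong about that program.

```python
def make_contrary_program(detector):
    """Return a toy 'program' that behaves contrary to the detector's verdict."""
    def program():
        if detector(program):  # the detector claims the program is infected...
            return "benign"    # ...so it behaves benignly (a false positive)
        else:                  # the detector claims the program is clean...
            return "spread"    # ...so it spreads (a missed detection)
    return program

says_clean = lambda p: False     # a detector that calls everything clean
says_infected = lambda p: True   # a detector that calls everything infected

print(make_contrary_program(says_clean)())     # spread
print(make_contrary_program(says_infected)())  # benign
```

Whatever verdict the detector gives, the contrary program falsifies it, which is the self-reference trick underlying both Cohen's result and this paper's stronger one.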


This paper is somewhat shorter than the previous one I read. It discusses undetectable computer viruses: the researchers want us to know that, according to the study, there really are some viruses that cannot be detected. One may believe a virus has been detected when in fact the detection algorithm is incorrect. As for the presentation, the paper follows a sound format, starting with definitions of terms and, of course, an introduction of the problems. Throughout the paper, formulas and algorithms are well presented, which makes what the authors want to imply convincing. They provide algorithms for detecting viruses and explain how, at times, a virus is not really detected even though we thought it was. The contents are definite and detailed.

Computers and Epidemiology
Jeffrey O. Kephart, David M. Chess, Steve R. White
High Integrity Computing Laboratory
IBM Thomas J. Watson Research Center


The researchers started this research paper with the two levels of behavior of computer viruses: microscopic and macroscopic. The micro level is the focus of hundreds of researchers who dissect and try to kill off the dozens of new viruses written every month, while the macro view of computer viruses has been comparatively neglected. The situation is being remedied in two ways: by the collection of statistics from actual incidents, and by computer simulation of virus spread. This epidemiological approach -- characterizing viral invasions at the macro level -- has led to some insights and tools that may help society cope better with the threat. Today, computer virus epidemiology is an emerging science that reveals that protective measures are definitely within reach of individuals and organizations. Biologists have combined the micro- and macroscopic perspectives on disease to good effect. It turns out that biological diseases and computer viruses spread in closely analogous ways, so that each field can benefit from the insights of the other. One of the simplifications worth borrowing from the biologists is to regard individuals within a population -- in this case, computers and associated hard disks, diskettes, and other storage media -- as being in one of a few discrete states, such as "susceptible" or "infected". In epidemiological language, pairs of individuals have "adequate contact" with each other whenever one would have transmitted a disease to the other if the first had been infected and the second had been susceptible. Virus incident statistics collected from the sample population are very revealing about how quickly viruses are spreading in the real world and how prevalent they have become. It so happens that the number of infected PCs in the world is roughly proportional to the number of incidents observed in the sample population. Complete eradication of all viruses is impossible as long as there are malicious programmers. Combining microscopic and macroscopic solutions, however, holds out the hope of reducing the problem to the nuisance level.


I think the researchers are successful in informing readers about the similarity between viruses in humans and in computers. The contents are definite and specific, and the format is very well arranged too. They gave illustrations that help readers understand more clearly. Like the first two papers that I read, it is done in a very concise way.

Back to top Go down
View user profile
Esalle Joy Jabines

Posts : 16
Points : 16
Join date : 2009-06-23

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Wed Jul 22, 2009 2:57 am

Novel Nano-organisms from Australian Sandstone

Philippa J.R. Uwins, Richard I. Webb, and Anthony P.Taylor

Centre for Microscopy and Microanalysis, The University of Queensland , St. Lucia, Queensland, Australia 4072
Department of Microbiology, The University of Queensland, St. Lucia, Queensland, Australia 4072


Nanobes have cellular structures that are strikingly similar in morphology to Actinomycetes and fungi (spores, filaments, and fruiting bodies) with the exception that they are up to 10 times smaller in diameter. Nanobes show a positive reaction to three DNA stains which strongly suggests that nanobes contain DNA.

This paper describes various organic features that were observed as unusual growths on sandstone samples and other substrates. This paper also documents their morphology, elemental composition, and structural detail. Sandstone samples with observed in situ nanobe growths were used in the study. Scanning electron microscopy, transmission electron microscopy, energy dispersive X-ray spectroscopy, and DAPI, Acridine Orange, and Feulgen staining for DNA were the methods used in the study.

As a result of the study, many properties of the nano-organisms support the thesis that nanobes are biological structures. The authors present seven such properties, drawn from the different methods and analyses performed on the samples, to show that nanobes or nano-organisms are biological structures.


There were terms in this paper that were a bit hard to understand. But, as to the organization of its contents, the paper shows a clear purpose, which was to examine whether nanobes are biological structures, although the standard format for a research paper, as we have discussed in class, cannot be noticed here. The methods used in testing the samples were presented in a way that readers can understand. After showing the techniques performed on the samples, the authors also presented a brief discussion of their analyses. They ended the paper with a conclusion that supported their thesis. A nice thing about the conclusion is that the authors also presented the other side: the possibility that nanobes are not biological structures. Based on their conclusion, they had properly obtained evidence for the existence of nanobes or nano-organisms as biological structures.

Where There's Smoke, There's Mirrors: The Truth about Trojan Horses on the Internet

Sarah Gordon, David M. Chess

IBM TJ Watson Research Center
Yorktown Heights, NY


Trojan horses are programs that purposefully damage a user's system upon their invocation. They almost always aim to disable hard disks, although they can, in rare cases, destroy other equipment too. This paper examined the prevalence, technical structure, and impact of non-viral malicious code ("Trojan horses") on the Internet, and its relevance to the corporate and home user.

Throughout computing history, we can find references to Trojan horses. In the late 1980s, FidoNet bulletin boards were popular places for computer users to gather and engage in various forms of communication. Files were also available for download. As users downloaded programs, they sometimes came across programs that claimed to do one thing but actually did another. Someone came up with the idea that it might be good to document the existence of these programs and warn users. Out of this need and idea, The Dirty Dozen was born: a list established to provide warnings about the most common Trojans and bombs. The list included the filename, a description of what the program is supposed to do, followed by what the program actually does. Many Trojan horses appeared, which were also examined by the different anti-virus firms. Determining whether a program was really a Trojan horse was a big problem.

The study used user simulations and first-hand reports provided by real users, focusing on the type and scope of actual Trojan threats encountered on the Internet. The status of hostile active content, including Java and ActiveX, on the Internet, its impact in the real world, and strategies for minimizing the risk of damage from Trojan horses on the Internet were also presented. These preventive measures were drawn from the results of the simulations, and the reports coming from real users were also used in the conclusion.


This paper was a bit long to read since it really discussed the history of Trojan horses. But that was helpful in the sense that it introduced beforehand what a Trojan horse really is. As to the format of the paper, I think the standard format of a research paper is unnoticeable, although it had an introduction and conclusion. I think the history presented served as the related literature. As I noticed while reading, if you do not read the whole content of the paper, you will not know what methods were used or where in the paper you can actually find them, unlike the previous paper I read, where the organization of the information was easy to recognize. Some parts of the paper were also not that clear. I think it is because of the different section titles the author used: if a reader does not actually read and understand that part of the paper, the reader will not know the relevance or connection of the title to the content. But, as to the information presented, I think the study is helpful for victims of Trojan horses as well as future victims.

An Environment for Controlled Worm Replication and Analysis
or: Internet-inna-Box

Ian Whalley

Bill Arnold, David Chess, John Morar, Alla Segal, Morton Swimmer
IBM TJ Watson Research Center, PO Box 704, Yorktown Heights, NY 10598, USA


A worm is a program that distributes multiple copies of itself within a system or across a distributed system. In order to understand the requirements of a worm replication system, the author presented a brief history of worms and their properties.

So-called 'worms' have been a feature of the malware landscape since the beginning, and yet have been largely ignored by anti-virus companies until comparatively recently. However, the near-complete connectivity of computers in today's western world, coupled with the largely Win32-centric base of installed operating systems make the rise of worms inevitable.

The author described techniques and mechanisms for constructing and utilizing an environment enabling the automatic examination of worms and network-aware viruses. The paper is not intended to be a discussion of the Immune System concept. Instead, the intent is to describe an approach that has been applied to the problem with some measure of success.

The approach involves building a virtual SOHO network, which is in turn connected to a virtual Internet. Both the virtual LAN and WAN are populated with virtual machines. The suspected worm is introduced into this environment and executed therein. The whole system is closely monitored as execution progresses in the isolated environment, and data is amassed describing what the suspected worm did as it executed. This data is then processed by the system in an attempt to automatically determine whether or not the suspect program is performing actions indicative of a worm or internet-aware malware. The paper also presented an outline of a functional prototype of a worm replication system.
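The last step, processing the amassed data to decide whether the suspect behaved like a worm, can be sketched very simply. This is a hypothetical illustration only (the paper's actual analysis is far more elaborate; the event format and threshold are invented): given an event log captured while the suspect ran in the isolated network, flag worm-like behavior such as connection attempts to many distinct hosts.

```python
from collections import Counter

# Hypothetical sketch of post-execution log analysis: the event tuples,
# field names, and threshold are assumptions for illustration, not the
# paper's actual mechanism.

def looks_like_worm(events, host_threshold=10):
    """events: list of (action, target) tuples captured by the monitor.
    Flags the suspect if it tried to connect to many distinct hosts."""
    targets = Counter(t for action, t in events if action == "connect")
    return len(targets) >= host_threshold

benign = [("read", "config.ini"), ("connect", "update.example.com")]
scanner = [("connect", f"10.0.0.{i}") for i in range(50)]  # scanning behavior

assert not looks_like_worm(benign)
assert looks_like_worm(scanner)
```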


I think, if I'm not mistaken, this paper is a good example of technical research. It introduced a way of controlling worm replication on networks. Although the author admitted that the development of this system is not yet complete, I think this paper is really a good start for people who want to develop one like it. I was not bored reading this paper even though it was quite long. The author presented different things in relation to the study, but everything was done concisely. The manner of discussion in each section was direct to the point. As to the format of the paper, I think the standard format I know was still not noticeable, just like in the other papers I have read. Considering the organization of data in the paper, everything was clear because every section was properly presented. To sum up, the sections of the paper were concisely arranged, but in a way that can easily be understood by readers.
Back to top Go down
View user profile
angel mae b. brua

Posts : 38
Points : 46
Join date : 2009-06-23
Age : 28
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Wed Jul 22, 2009 12:58 pm

The One-Sender-Multiple-Receiver Technique and Downlink
Packet Scheduling in Wireless LANs
Zhenghao Zhang, Steven Bronson, Jin Xie and Hu Wei
Computer Science Department
Florida State University Tallahassee, FL 32306, USA


In this paper, the researchers studied the one-sender-multiple-receiver (OSMR) transmission technique. It allows a sender to transmit to multiple receivers on the same frequency simultaneously by utilizing multiple antennas at the sender. For the researchers, OSMR has the potential to significantly improve the downlink performance of wireless LANs, because it can send distinct packets to multiple computers at the same time. They conducted this research at Florida State University and performed many experiments. Since the topic is technical, they gathered data and applied many statistical methods. They focused on the problem of maximizing network throughput and proposed a simple algorithm, one suitable for implementation with inexpensive processors.
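To give a flavor of what a simple downlink scheduling algorithm might look like, here is a hedged illustration. The grouping rule, function name, and antenna model are my assumptions, not the paper's actual algorithm: a greedy scheduler packs queued packets into simultaneous OSMR transmissions, one antenna stream per distinct receiver.

```python
# Hedged illustration (not the paper's algorithm): greedily group queued
# downlink packets so packets for distinct receivers can be transmitted
# simultaneously, up to the number of transmit antennas.

def greedy_osmr_schedule(queue, antennas=2):
    """queue: list of receiver ids, one per packet, in arrival order.
    Returns transmission groups; each group holds distinct receivers."""
    groups = []
    for receiver in queue:
        # put the packet in the first group with a free antenna stream
        # that does not already carry a packet for this receiver
        for group in groups:
            if len(group) < antennas and receiver not in group:
                group.append(receiver)
                break
        else:
            groups.append([receiver])
    return groups

print(greedy_osmr_schedule(["A", "B", "A", "C", "B"], antennas=2))
# → [['A', 'B'], ['A', 'C'], ['B']]
```

Five packets go out in three transmission slots instead of five, which is the kind of throughput gain the review describes.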


The abstract of the study is only one paragraph with few words, but it goes directly to the point of what the study is all about. Since the study is very technical, and the statistical formulas are very broad to understand, everything was defined very well. They had graphical solutions, formulas, and experiments, though they never explicitly stated the methodology they used.

The whole paper was kind of confusing for me. Like what I said during my sharing time, research papers do not have one specific format, but this paper does not match the format I discussed. What I noticed is that each experiment they performed was put under its own heading. Then, at the final part, was their conclusion. The conclusion is somehow similar to the abstract, since they were really set on solving or creating the solutions stated in the abstract.

Predictux: A Framework for Predicting Linux
Kernel Incremental Release Times
Subhajit Datta Robert van Engelen Andy Wang
Department of CS Department of CS Department of CS
Florida State University Florida State University Florida State University
Tallahassee, FL 32306-4530 Tallahassee, FL 32306-4530 Tallahassee, FL 32306-4530


Predictux is a decision-tree-based framework for predicting how many days the next Linux kernel version will take to be released, based on analyzing some parameters of its past releases. In other words, this study focuses on predicting when the next Linux kernel will be released. The word "release" is used to mean a subset of a software system's functionality that is released to users for testing, use, and feedback. The research they conducted examines the time a Linux kernel takes to be released and whether, within that time, there is a chance that the software is reliable. For this, they gathered many factors, since determining the release time requires many of them. They used a decision tree to ease understanding and interpreting the prediction. They also presented experimental validation as well as open issues and future work.
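A decision tree for this task can be pictured with a toy sketch. The features, thresholds, and predicted values below are invented for illustration and are not Predictux's actual model; the point is only how a tree turns a few release parameters into a predicted number of days.

```python
# Hedged, hand-rolled toy decision tree in the spirit of a framework
# like Predictux. All features, split thresholds, and leaf values are
# made-up assumptions, not the paper's learned model.

def predict_release_days(files_changed, lines_added, is_major):
    """Predict days until the next release from past-release parameters."""
    if is_major:
        # major releases take longer; split on churn size
        return 90 if files_changed > 1000 else 60
    # incremental release branch
    if lines_added > 50_000:
        return 30
    return 14 if files_changed > 200 else 7

assert predict_release_days(1500, 120_000, True) == 90
assert predict_release_days(50, 1_000, False) == 7
```

In a real framework the tree would be learned from historical release data rather than written by hand, which is what the experimental validation the review mentions would assess.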


The abstract was just a glimpse of what the study is all about. The problem was discussed in the introduction and motivation. There were only five chapters in this paper: the introduction, the framework, the experimental validation, the open issues and future work, and the conclusion. What they discussed is just a brief summary of what they conducted. Actually, it is simple compared to the first research paper I listed, and it is easier to understand even though they still use statistics. The topic was really understandable. In their conclusion, they answer their problem: though at first it was just a hypothesis, in the end, after the experimentation and statistical methods they performed, they came up with a definite answer.

Project-entropy: A Metric to Understand Resource
Allocation Dynamics across Software Projects
Subhajit Datta Robert van Engelen
Department of CS Department of CS
Florida State University Florida State University
Tallahassee, FL 32306-4530 Tallahassee, FL 32306-4530


This paper introduced and illustrated the use of the project entropy metric to understand the dynamics of allocating resources across software projects. The authors also forwarded a hypothesis regarding the limit to which resource reallocation can enhance user satisfaction, and outlined plans for further empirical validation of their ideas. In this research, project entropy is a formula to quantify the extent to which users accept particular software after its release. They again formed hypotheses in gathering and formulating solutions, meaning they performed simulations.
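An entropy-style metric over resource allocation can be illustrated with a small sketch. This is a hedged example: the paper's exact formula is not reproduced in the review, so the code below uses the standard Shannon entropy of the fraction of effort assigned to each project, which is the natural reading of "project entropy".

```python
import math

# Hedged sketch: Shannon entropy over resource allocation fractions,
# a natural (assumed) form for a "project entropy" metric.

def project_entropy(allocations):
    """allocations: hours (or headcount) assigned to each project.
    Returns entropy in bits; higher means more evenly spread resources."""
    total = sum(allocations)
    probs = [a / total for a in allocations if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# Evenly spread resources maximize entropy; concentration minimizes it.
assert abs(project_entropy([10, 10, 10, 10]) - 2.0) < 1e-9
assert project_entropy([40, 0, 0, 0]) == 0.0
```

Tracking how this value moves as people are reallocated between projects is the kind of "resource allocation dynamics" the paper's title refers to.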


This paper has the same style as the previous research paper, since they share the same researchers. Like the one above, it provides a definite solution or answer based on experimentation and simulation. This research also has five chapters.

Back to top Go down
View user profile http://www.gelaneam.blogspot.com
George Dan Gil

Posts : 30
Points : 34
Join date : 2009-06-23

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Sat Jul 25, 2009 4:38 pm

Green computing: IBM introduces new energy management software
By Manufacturing Business Technology Staff –
Manufacturing Business Technology,
5/26/2008 9:04:00 PM

As part of IBM's Project Big Green, IBM has announced new software developed to help customers maximize energy efficiency and reduce costs associated with power and cooling. This latest version of IBM Tivoli Monitoring (ITM) software combines views of energy management information that enable optimization across data centers and facilities infrastructures. Monitoring capabilities offer customers the ability to understand energy usage, alert data center managers to potential energy-related problems, and take preventive action. Historical trending and forecasting capabilities enable greater precision in existing environments and energy planning. Autonomic capabilities allow customers to set power and utilization thresholds to help control energy usage. The new software can also help customers handle physical constraints in the data center relating to space, power, and cooling.
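The threshold capability described above amounts to simple alerting logic. The following is a hypothetical sketch only (not IBM's actual Tivoli API; all names and values are invented) of the kind of power-cap check such monitoring software performs:

```python
# Hypothetical sketch of power-threshold alerting; the data shape,
# function name, and cap value are assumptions for illustration.

def check_power(readings_watts, threshold_watts=5000):
    """Return alert messages for every server exceeding the power cap."""
    return [
        f"ALERT: {name} at {watts} W exceeds {threshold_watts} W cap"
        for name, watts in readings_watts.items()
        if watts > threshold_watts
    ]

alerts = check_power({"rack1": 4800, "rack2": 5600})
assert alerts == ["ALERT: rack2 at 5600 W exceeds 5000 W cap"]
```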

This new IBM software provides monitoring not just for data centers but also for non-IT assets such as air conditioning equipment, power distribution units, lighting, and security systems.

IBM will join forces with nine partners to offer IBM's IT management expertise with solutions that will allow customers to monitor and control energy consumption across their enterprise to help reduce power consumption and energy costs and better maintain service levels. The partners include:


• APC and TAC by Schneider Electric:
• Eaton Corporation:
• Emerson Network Power
• Johnson Controls, Inc.
• Matrikon:
• OSIsoft:
• Siemens Building Technologies:
• SynapSense Corporation
• VMware:


IBM's Project Big Green is an amazing project that will surely help maximize energy efficiency. Since the software the project has announced is not just for IT assets but also for non-IT assets, it will be a big contribution to the earth. Maybe we can also contribute to the earth, like this one, by providing solutions to existing problems through research. I think this will be a tough one, but with this article as an inspiration, maybe we could somehow be like them. Let's help the earth!

Making Money with Articles: Niche Websites
By: Jo Han Mok

Choosing a good niche subject on which to base your website is one of the most important aspects of making money off of your articles. You should take each one of these keywords and use it as the basis of one article on each page. This way, even though you are targeting one specific subject, you will be sure to interest a wide variety of people in that one niche.

The best way to find keywords for your subject is to use a keyword software program. This will generate a list of keywords or phrases that contain your niche and will also show you approximately how many people search for each word or phrase. From this you can recognize which topics most people prefer. If there are a number of topics that you like, pick the one that you feel would be easiest to start with and then, once that site is built and generating some revenue, you can start another site.

You are never limited in what you can do with niche website marketing, unless you find out that you do not have the marketing skills or the needed funding to make it happen. Otherwise, the sky is the limit!
About the author:
Jo Han Mok is the author of the #1 international business bestseller, The E-Code. He shares his amazing blueprint for creating million dollar internet businesses at: www.InternetMillionaireBlueprints.com

This article presents easy steps for making a profit through online articles. It is also valuable because, aside from earning money, you will learn more through researching articles and items.

Before You Call a Web Developer, Ask Yourself One Question
By: Susan Daffron

Because we develop Web sites, not surprisingly, the first words we often hear from people are: "I need a Web site." My response is often "why?" The answer to that question can be quite telling. I can almost guarantee that you won't end up with a good Web site if you don't even know why you need one in the first place.

Lots of people waste their time and money on useless websites. The thing is that the website you will be developing should be well treated like business or marketing expenditure. For example, suppose you sell dog treats. You spend a bunch of money printing a brochure that explains why your dog treats are healthier or tastier than the ones at the grocery store. The goal for that brochure is to give people information on all the fabulous benefits of your special dog treats.

In much the same way, your Web site might explain why your dog treats are great. In fact, it might be nothing more than an "online brochure" with a lot of the same information as the paper one. That's a reasonable goal for a new site.

For reasons people go online to find information, to be entertained, or to buy stuff. If your site lets people do one or more of these things, it has a reason to exist. However, unlike your paper brochure, a Web site has only about four seconds to get your message across (according to a recent report from Akamai and Jupiter Research). If you have no clue what information people are supposed to glean from your Web site, neither will your site visitors. Four seconds later, they're gone and they probably won't return.

Your goal should be connected to your business which is the purpose of your website.
When setting Web site goals, it makes sense to think about the visitors you are hoping to attract to the site. Who will be reading it? What do they need to know? Why would they visit your site in the first place? What terms would they type into a search engine to find your site? If you don't have good answers for these questions, you should reconsider the question I asked at the beginning of this article: "Why do you need a Web site?"

Not every business needs a Web site. You know your business better than anyone, so before you pick up the phone to call a Web designer, think about what you want your Web site to do for you and why.
About the author:
Susan Daffron is the President of Logical Expressions, Inc. (www.logicalexpressions.com) and the author of books on pets, web business, computing, and vegetarian cooking. Visit www.publishize.com to receive a complimentary Publishize podcast or newsletter and bonus report."

This article should be fitting for those who want to have their own websites. It will make them realize how important it is to know the goals and objectives of an upcoming site. More than that, you will get to know whether it is really important to have a website for that agenda at all.
Back to top Go down
View user profile
Mary Rossini Diamante

Posts : 15
Points : 17
Join date : 2009-06-27
Age : 27

PostSubject: Assignment 1   Sat Jul 25, 2009 9:12 pm

Which database is more secure?
Oracle vs. Microsoft

David Litchfield [davidl@ngssoftware.com]


The paper examined the differences between the security postures of Microsoft’s SQL Server and Oracle’s RDBMS based upon flaws reported by external security researchers. Only flaws affecting the database server software itself have been considered in compiling this data. A general comparison is made covering Oracle 8, 9 and 10 against SQL Server 7, 2000 and 2005.

The study counts the number of security flaws in the Oracle and Microsoft database servers that were discovered and fixed from December 2000 until November 2006. Graphs indicate flaws that have been discovered by external security researchers in both vendors' flagship database products, namely Oracle 10g Release 2 and SQL Server 2005. No security flaws have been announced for SQL Server 2005. It is immediately apparent from the result graphs that Microsoft SQL Server has a stronger security posture than the Oracle RDBMS. The conclusion is clear: if security robustness and a high degree of assurance are concerns when looking to purchase database server software, then given the results, one should not be looking at Oracle as a serious contender.


From my standpoint, I believe that conducting such research is of assistance to database users in choosing which database is more functional in terms of security. Comparing two particular databases' security stances provides acquaintance with, and information on, how these databases perform security. Assessing the paper's format and flow, I could say that it was more of a statistical study. I am not certain how the results and data were acquired; the fact is, the paper did not provide a proper definition of its methodology, nor an abstract. However, regardless of that issue, the study is competent research. The study is definitely of great significance and contribution to concerned database users, but I would like to suggest that further enhancement of the construction of the paper should be practiced.

Analysis of an Electronic Voting System
Tadayoshi Kohno, Adam Stubblefield, Aviel D. Rubin, Dan S. Wallach
February 2004


The study is concerned with U.S. federal adopting paperless electronic voting systems. Analysis showed that this voting system is far below even the most minimal security standards applicable in other contexts. Researchers identify several problems including unauthorized privilege escalation, incorrect use of cryptography, vulnerabilities to network threats, and poor software development processes. The most fundamental problem with such a voting system is that the entire election hinges on the correctness, robustness, and security of the software within the voting terminal. They concluded that the voting system is unsuitable for use in a general election. Any paperless electronic voting system might suffer similar flaws, despite any “certification” it could have otherwise received.
Using publicly available source code, an analysis was performed of the April 2002 snapshot of Diebold's AccuVote-TS 4.3.1 electronic voting system. Significant security flaws were found. Based on analysis of the development environment, including change logs and comments, an appropriate level of programming discipline for the project was not maintained. There appears to have been little quality control in the process. The model where individual vendors write proprietary code to run elections appears to be unreliable, and if the process of designing voting systems is not changed, there will be no confidence that election results reflect the will of the electorate.

On the other hand, an open process would result in more careful development, as more scientists, software engineers, political activists, and others who value their democracy would be paying attention to the quality of the software that is used for their elections. Alternatively, security models such as the voter-verified audit trail allow for electronic voting systems that produce a paper trail that can be seen and verified by a voter. In such a system, the correctness burden on the voting terminal’s code is significantly less as voters can see and verify a physical object that describes their vote. They suggested that the best solutions are voting systems having a “voter-verifiable audit trail,” where a computerized voting system might print a paper ballot that can be read and verified by the voter.

To achieve a steadfast election, concerns about what voting system is implemented, and how, are always considered. With regard to the study, I deem that conducting this kind of research is significant to the public and to assuring the trustworthiness of an election. In conformity with evolving technology, an electronic voting system was examined to try out the security and reliability of adopting paperless electronic voting. Testing and simulation of the said system were done to examine its security assurance. I actually find this research complicated to perform. Findings showed that such a system may be unreliable, and exploiting an open process and other particular systems is recommended. The study is commendable, and to further elaborate its contribution, I would like to propose that a number of systems be taken into consideration as subjects of the study.

Open Standards, Open Formats, and Open Source
Davide Cerri and Alfonso Fuggetta
CEFRIEL - Politecnico di Milano
January 2007

The paper proposed some comments and reflections on the notion of “openness” and on how it relates to three important topics which are open standards, open formats, and open source. Often, these terms are considered equivalent and/or mutually implicated: “open source is the only way to enforce and exploit open standards”. This position is misleading, as it increases the confusion about this complex and extremely critical topic. The paper clarified the basic terms and concepts. This is instrumental to suggest a number of actions and practices aiming at promoting and defending openness in modern ICT products and services.

This paper concentrated on some of the issues and claims associated with open source. In particular, it will discuss the relationship among open source, open standards, open formats, and, in general, the protection of customers’ rights. Indeed, many consider open source as the most appropriate way to define and enforce open standards and open formats. In particular, the promotion of open standards and open formats is confused with the open source movement. Certainly, these issues are interrelated, but it is wrong to overlap them. For these reasons, the ultimate goal of the paper is to provide a coherent, even if preliminary, framework of concepts and proposals to promote the development of the market and to address customers’ needs and requests.


Openness, open source, and their related concerns have always been arguable issues. We too have discussed and tackled these issues. I believe the impact of this kind of study is favorable. It identified a number of definitions for the term "open standard", based on the different practices in the market. Moreover, the paper contains some proposals to deal with the different issues and challenges related to the notions of openness, customers' rights, and market development. The study used some historical data in compiling various definitions of open standard. It is an evaluation or overview of subjects related to open standards. This study is somewhat a descriptive research.


PostSubject: Assignment 1 (Published Scientific Papers)   Sun Jul 26, 2009 12:48 pm

---------------------------------------- 1 ----------------------------------------

SpamGuru: An Enterprise Anti-Spam Filtering System
Richard Segal, Jason Crawford, Jeff Kephart, Barry Leiba
IBM Thomas J. Watson Research Center
{rsegal, ccjason, kephart, barryleiba}@us.ibm.com

Spam volumes are increasing, which is why spam-reduction techniques are being developed rapidly. The researchers believe that no single anti-spam solution is the right answer and that the best approach is a multi-faceted one. They and their colleagues undertook anti-spam research and came up with SpamGuru, an anti-spam filtering system.

SpamGuru is an enterprise-class anti-spam filter that combines several learning, tokenization, and user interface elements to provide enterprise-wide spam protection with high spam detection rates and low false-positive rates. Three basic design principles underlie SpamGuru. (1) It simplifies administration. The costs of the infrastructure and operation of spam in today’s corporate environment are staggering, compounded by the time and effort exerted by system administrators in blocking spam and preventing false positives; SpamGuru automates decisions and tasks, thus reducing IT costs. (2) It is highly customizable and tunable by system administrators and individual users. Users have the ability to customize e-mail filtering, which improves user satisfaction and eliminates support issues created by a one-size-fits-all solution. (3) SpamGuru provides a very low false-positive rate that can be tuned to suit the administrator’s or user’s spam detection vs. false positive tradeoff. Its multiple-classifier approach makes it robust against changes in spammer tactics.
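To illustrate the tunable multiple-classifier idea described above, here is a minimal sketch of my own. It is a toy illustration only, not SpamGuru's actual code or classifiers: each toy classifier returns a spam score, a weighted sum combines them, and a tunable threshold sets the detection vs. false-positive tradeoff.

```python
# Toy illustration of a multi-classifier spam filter with a tunable
# threshold -- NOT SpamGuru's actual implementation. Each classifier
# returns a score in [0, 1]; a weighted vote is compared against a
# threshold that an administrator could raise to reduce false positives
# (at the cost of missing more spam).

def keyword_classifier(message: str) -> float:
    """Toy classifier: fraction of known spammy keywords present."""
    keywords = {"viagra", "winner", "free", "click"}
    words = set(message.lower().split())
    return len(words & keywords) / len(keywords)

def length_classifier(message: str) -> float:
    """Toy classifier: very short, all-caps messages look spammier."""
    return 1.0 if len(message) < 40 and message.isupper() else 0.0

def is_spam(message: str, threshold: float = 0.5) -> bool:
    # The weights and the threshold are the tunable knobs.
    classifiers = [(keyword_classifier, 0.7), (length_classifier, 0.3)]
    score = sum(weight * clf(message) for clf, weight in classifiers)
    return score >= threshold

print(is_spam("Click here, FREE viagra winner!"))   # spammy keywords detected
print(is_spam("Meeting moved to 3pm, see agenda attached."))
```

Raising `threshold` toward 1.0 makes the filter more conservative, which is the same tradeoff the paper describes between spam detection rates and false positives.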

The paper clearly stated its subject – SpamGuru, what it is for, and how it works. From the title alone you can already determine what the paper is about, and it did catch my attention. Readers like me who have received spam messages will be interested in reading the entire paper. The paper is only seven pages long. The first paragraph, the abstract, was short but comprehensible enough. Illustrations were used to show how the system works, which is good since readers can easily understand what the paper is trying to convey. However, I noticed that the paper focuses mostly on the system itself without a thorough discussion of spam, and I am not sure whether it described how the study was done.

Regarding the presentation and the format, the subject was well presented, although some information I expected was missing and some sections deviate from the standard format of a scientific paper. References and authors were cited. Overall, the ideas were well organized, and I was impressed by SpamGuru. It was very interesting and I can really relate to it, since I have received spam messages in my electronic mail account before. In one case the message sent to me was not really spam – it was a verification mail – yet it was labeled as spam.

[1] http://www.ceas.cc/papers-2004/126.pdf

---------------------------------------- 2 ----------------------------------------

Rooter: A Methodology for the Typical Unification
of Access Points and Redundancy

Jeremy Stribling, Daniel Aguayo and Maxwell Krohn

The researchers questioned the need for digital-to-analog converters. The research focuses not on whether symmetric encryption and expert systems are largely incompatible, but on proposing new flexible symmetries (Rooter). Their evaluation method represents a valuable research contribution in and of itself. Their overall evaluation seeks to prove three hypotheses: (1) that they can do a whole lot to adjust a framework’s seek time; (2) that von Neumann machines no longer affect performance; and finally (3) that the IBM PC Junior of yesteryear actually exhibits better energy than today’s hardware.

They implemented their scatter/gather I/O server in Simula-67, augmented with opportunistically pipelined extensions. Their experiments soon proved that automating their parallel 5.25” floppy drives was more effective than autogenerating them. In the paper, they motivated Rooter, an analysis of rasterization.

To begin with, I would like to say that the research was a complicated one. If I were not an IT student, reading and understanding the paper would have been very hard for me. The paper is only four pages long, but in my experience one reading of it is not enough. I had a hard time understanding it, perhaps because I was unfamiliar with some of the terms that were used, and perhaps because of the acronyms and formulas as well.

Acronyms were used in some sections of the paper. I tried to look for the expansions of the acronyms, but I do not think they were included in the paper, which was one of the reasons I was not able to thoroughly understand what the paper was trying to convey. However, it was a good thing that the researchers included illustrations for clarity and also showed a schematic diagram of their methodology. The paper follows the standard format of a scientific paper.

[1] http://pdos.csail.mit.edu/scigen/rooter.pdf

---------------------------------------- 3 ----------------------------------------

How Prevalent are Computer Viruses?
Jeffrey O. Kephart and Steve R. White
High Integrity Computing Laboratory
IBM Thomas J. Watson Research Center
P.O. Box 704, Yorktown Heights, NY 10598

Hundreds of computer viruses have been identified, and their number is increasing rapidly. The researchers seek to understand the extent of the computer virus problem in the world today and would like to predict what it will be like in the future. To answer the questions they raised, they conducted surveys and collected statistics on virus incidents directly from a large chosen population. For each incident they recorded (at a minimum) where and when the incident occurred, what virus was involved, and how many machines were affected. The results showed that only 15% to 20% of the more than 700 viruses in their collection have ever been seen “in the wild” (the term used in the paper) in the population. Even among those that have been seen, a small minority account for the majority of incidents. It was also shown that the total number of virus incidents per quarter is increasing, due to a combination of two effects: (1) some viruses are becoming more prevalent, and (2) the number of different viruses observed “in the wild” is increasing.

Their statistics suggested the following steps to help control the problem. First, make sure that users use anti-virus software. Second, make sure that they know what viruses are and whom to contact if they find one. Lastly, make sure that the people they contact remove the reported infection (and others connected with it) quickly. I agree with the paper’s statement that as we understand more about how computer viruses spread, we will be able to predict how the risks will change in the coming years.

I would like to say that it was a well-written scientific paper. It contained the researchers’ motivation for doing the experiment, the design of the experiment, its execution, and the meaning of the results. It was clear, though not so concise. The approach they used to investigate the issue was described in the paper. The paper is composed of ten sections; the sections follow the standard format, the other necessary sections were included, and I think the information was put in the appropriate locations.

Since the researchers made surveys and comparisons, the paper needs mathematical equations and figures. Different charts and data sheets – including the fraction of incidents of a given size, the fraction of infected computers involved in incidents, the number of viruses, the relative frequency of incidents, and more – were presented in the paper together with their interpretations. From what I have read and learned about writing a scientific paper, the information should be presented clearly and concisely. What I noticed most is that the paper was very detailed.

[1] http://vx.netlux.org/lib/static/vdat/epvirpre.htm

---------------------------------------- End ----------------------------------------


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Fri Jul 31, 2009 6:50 pm

Research Paper No. 1948
Empirical Analysis of Indirect Network Effects in the
Market for Personal Digital Assistants
Harikesh Nair
Pradeep Chintagunta
Jean-Pierre Dubé
October 2003


This study presents a framework to measure empirically the size of indirect network effects in high-technology markets with competing incompatible technology standards. These indirect network effects arise due to inter-dependence in demand for hardware and compatible software. By modeling the joint determination of hardware sales and software availability in the market, we are able to describe the nature of demand inter-dependence and to measure the size of the indirect network effects. We apply the model to price and sales data from the industry for Personal Digital Assistants (PDAs) along with the availability of software titles compatible with each PDA hardware standard. Our empirical results indicate significant indirect network effects. By July 2002, the network effect explains roughly 22% of the log-odds ratio of the sales of all Palm O/S compatible PDAs to Microsoft O/S compatible PDAs, where the remaining 78% reflects price and model features. We also use our model estimates to study the growth of the installed bases of Palm and Microsoft PDA hardware, with and without the availability of compatible third party software. We find that lack of third party software negatively impacts the evolution of the installed hardware bases of both formats. These results suggest PDA hardware firms would benefit from investing resources in increasing the provision of software for their products. We then compare the benefits of investments in software with investments in the quality of hardware technology. This exercise helps disentangle the potential for incremental hardware sales due to hardware quality improvement from that of positive feedback due to market software provision.


The “framework to measure empirically the size of indirect network effects in high-technology markets with competing incompatible technology standards” is a bit vague.

The paper includes potential research extensions that further broaden one’s knowledge of the subject: “A potential extension of the paper would be to incorporate the quality of software, rather than just the availability, into the model framework. Though straightforward, estimating such a model would require considerably more data on the software side.”


Research Paper No. 1958
Defining the Minimum Winning Game in
High-Technology Ventures
Robert A. Burgelman
Robert E. Siegel
December 2006


Based on a combination of exploratory field research and executive experience, the study proposes that defining the “Minimum Winning Game” (MWG) is a difficult yet critical responsibility of top management to keep a high-technology venture focused and able to learn from its ongoing efforts in the face of rapidly evolving technological and market uncertainties. We also propose that achieving the MWG requires the intelligent balancing of three “key drivers of strategic action”: technology development, product development, and strategy development. Finally, we propose that instilling the discipline necessary to define the MWG and balance the drivers of strategic action is facilitated by the use of a strategy-making process informed by key data gathering and analysis tools such as the market requirement document (MRD) and the product requirement document (PRD).


While highly cognizant of the dangers of bureaucracy, the authors propose that strategic discipline in high-technology ventures is more likely to be achieved if top management requires the definition of the Minimum Winning Game to be data driven, which is facilitated by the use of a disciplined strategic planning process and product development tools such as the MRD and PRD.

The three “key drivers of strategic action” – technology development, product development, and strategy development – were clearly stated.


Research Paper No. 1994
Organizational Evolution with Fuzzy Technological Boundaries: Tape Drive Producers in the World Market, 1951-1998
Glenn R. Carroll
Mi Feng
Gael Le Mens
David G. McKendrick
August 2008

This study shows how tape drive producers respond to the almost continuous emergence of new drive formats across the technology’s history. The analysis characterizes the technological formats of tape drives according to their degree of contrast (distinctiveness and visibility) from other formats. High contrast implies strong boundaries, which prevent information leakage and appear to provide solid strategic footing because they are not easy to adopt and replicate; yet formats with strong boundaries may appear risky to potential customers who may prefer formats more readily substitutable. We develop and test arguments about how different types of tape drive manufacturers add and drop the production of formats as a function of the contrast of formats. A key distinction we make is between single-format producers whose format has high contrast (yielding a clear firm level identity) and multi-format producers whose various formats blur the firm’s identity when they are high in contrast. In the empirical analysis, we find that single-format firms producing formats with high contrast experience a lower rate of mortality, while average high contrast in the technology portfolio of multi-format producers lowers their survival chances. We also find that single-format firms with technology formats characterized by high levels of contrast are: (1) more likely to add newly emerging formats and (2) less likely to drop existing formats. By contrast, multi-format manufacturers show mixed patterns.


"Overall, the findings from the empirical analysis support the proposal that higher contrast is associated with relatively less permeable technological boundaries: formats with low contrast are more likely to be added to or dropped from the portfolio of tape-drive producers." It is not clearly stated why formats with low contrast are more likely to be added to or dropped from the portfolio.

The arguments about how different types of tape drive manufacturers add and drop the production of formats as a function of format contrast were clearly defined.


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Fri Jul 31, 2009 7:26 pm

Research Paper No. 1843
The “Strategy and Action In The Information
Processing Industry Course” (S370) At Stanford
Business School: Themes, Conceptual Frameworks,
Related Tools
Robert A. Burgelman
Andrew S. Grove
January 2004

This paper provides an overview of the key themes, conceptual frameworks and related tools that are used to examine cases and industry notes in the course “Strategy and Action in the Information Processing Industry”, an applied industry analysis course that we have co-taught at the Stanford University Graduate School of Business since 1991. Through at least two business cycles and an Internet boom and bust, the course has explored the impact of relentless technological change, major deregulation, and increasing globalization of competition on the structure and evolution of the information technology industry.


Fundamentally changing industry dynamics cause major threats as well as opportunities for incumbent firms.
- The major threats were not clearly discussed and explained, nor were the opportunities for incumbent firms. As a mere reader of the paper, I just can’t understand why, and how, fundamentally changing dynamics bring threats to a company.


The scope of the study is clear, as is the period (1989-2003). This is important because some factors that applied yesterday might be different today. The impact of relentless technological change, major deregulation, and increasing globalization of competition on the structure and evolution of the information technology industry was very well stated.


Research Paper No. 1876(R)
The Effect of Market Structure on Cellular
Technology Adoption and Pricing
Katja Seim
V. Brian Viard
May 2006

The study analyzes the effect of entry on the technology adoption and calling plan choices of incumbent cellular firms. Focusing on the time period from 1996, when incumbents enjoyed a duopoly market, to 1998, when they faced increased competition from personal communications services (PCS) firms, we relate the adoption of digital technology and the change in the breadth of calling plans to the amount of PCS entry experienced in different markets. Variation in geographic features contributes to the difficulty of building a sufficiently large wireless infrastructure network, providing effective instruments for endogenous entry decisions. Our results indicate that incumbents are more likely to upgrade their technology from analog to digital in markets with more entry. Consistent with increased digital technology adoption in more competitive markets, we find that incumbents in the process of digital upgrading also phase out a larger number of analog calling plans and introduce a larger number of digital calling plans in less concentrated markets.


The scope of the study is so large (cellular firms, infrastructure built in different geographic locations, prices) that it may result in a higher degree of inaccuracy. The successful diffusion of a new innovation depends not only on the time of introduction, but also on the subsequent pricing of the technology. Prices were not objectively considered, since prices vary from country to country depending on the state of each economy.


The study includes how firms price a newly introduced service and, specifically, how competition affects firms’ choices of plan breadth in the wake of technological innovation, which helps readers understand the paper better. The reasons why some firms eliminate analog plans and switch to digital plans were clearly stated.


Research Paper No. 1933 (R)
Stable Outcomes of Generic Games in Extensive Form
Srihari Govindan
Robert Wilson
May 2007

The study applies Mertens’ definition of stability for a game in strategic form to a game in extensive form with perfect recall. It proves that if payoffs are generic then the outcomes of stable sets of equilibria defined via homological essentiality by Mertens coincide with those defined via homotopic essentiality. This implies that for such games various definitions of stability in terms of perturbations of players’ strategies (as in Mertens) or best-reply correspondences (as in Govindan and Wilson) yield the same outcomes. A corollary yields a computational test that usually succeeds in identifying the stable outcomes of such a game.


A game in strategic form is not clearly differentiated from a game in extensive form, and not all readers understand the terms used in game theory.


Moreover, if payoffs are generic then stable outcomes are the same in both representations; only when payoffs are nongeneric does an extensive-form game with perfect recall require a deeper analysis using the full apparatus of homology theory applied to the strategic form.
charmaine anne quadizar


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Tue Aug 18, 2009 11:36 pm

Assignment 1
SpamGuru: An Enterprise Anti-Spam Filtering System
Richard Segal, Jason Crawford, Jeff Kephart, Barry Leiba
IBM Thomas J. Watson Research Center


Spam-reduction techniques have developed rapidly over the last few years as spam volumes have increased. The spam problem requires a multi-faceted solution that combines a broad array of filtering techniques with various infrastructural changes, changes in financial incentives for spammers, legal approaches, and more. SpamGuru is a collaborative anti-spam filter that combines several learning, tokenization, and user interface elements to provide enterprise-wide spam protection with high spam detection rates and low false-positive rates. The paper presents the system architecture of the filter, which is based on three important design principles. First, SpamGuru relieves the burden of anti-spam administration by automating several tasks such as maintaining white- and black-lists, updating filters automatically in response to user votes, etc. Second, the SpamGuru architecture supports easy, flexible configuration. This is important because one size does not fit all, and because rapid changes in spammer techniques can necessitate changes in configurations or tuning parameters. SpamGuru gives individual users control over their level of filtering and provides personalized filtering that is usefully combined with global filters based on collaborative voting among users. The filter archive handles users’ concerns about false positives without the need to call support. Finally, by combining multiple disparate classifiers, the authors showed that SpamGuru can achieve excellent discrimination between spam and legitimate mail, and can offer a tunable tradeoff between spam detection rates and false positives, with excellent spam detection even at very low false-positive rates.

This paper is somewhat technical, and the way the study is presented differs from other kinds of research papers. An abstract should not be missing from a paper, because it is the part that tells you what the research is about, why it was done, and what its significance is. Of the research papers I have read, this is the only one that had an introduction. As I said, the study is quite technical, so a system overview was presented; for me, in a technical research paper a system overview should be part of the study, because not all readers know about the subject. As in other research papers, the method, analysis, results and conclusion were presented. The study also includes recommendations and has references throughout.

Immune Activation and Autoantibodies in Humans with
Long-Term Inhalation Exposure to Formaldehyde
By: Jack D. Thrasher, Ph.D.
Thrasher & Associates
Northridge, California

Alan Broughton, M.D., Ph.D.
Antibody Assay Laboratories
Santa, California

Roberta Madison, Dr.P.H.
Department of Health Sciences
California State University
Northridge, California
Published in: Archives of Environmental Health, Vol. 45, pp. 217-223, 1990


In this paper, four groups of patients with long-term inhalation exposure to formaldehyde (HCHO) were compared with controls that had short-term periodic exposure to HCHO. The following were determined for all groups: total white cell, lymphocyte, and T cell counts; T-helper/T-suppressor ratios; total Ta1+, IL2+, and B cell counts; antibodies to formaldehyde-human serum albumin conjugate and autoantibodies. When compared to the controls, the patients had significantly higher antibody titers to HCHO-HSA. In addition, significant increases in Ta1+, IL2+, and B cells and autoantibodies were observed. Immune activation, autoantibodies, and HCHO-HSA antibodies are associated with long-term formaldehyde inhalation. Presently, autoimmune disorders have been diagnosed clinically in these patients.

This second paper was quite different from the other papers I read. It consists of several tables and statistics showing the comparisons of the different groups involved in the study, and after each table there is discussion and interpretation of the data. The paper also defines some important terms that may be unfamiliar; this was not found in the other papers. For me, this is very important, because not all readers are familiar with the technical words being used. After that, the materials and methods were presented, along with the statistical analysis. In the study, the student group was used as the control for all statistical tests, and each of the four patient groups was compared with the controls using (a) Z tests and (b) two-tailed t tests. Results were shown in tables. After the presentation of all the results and analysis, a conclusion was given at the end of the study.

Short and Long Term Functional Status Outcomes of CPAP Treatment in Obstructive Sleep Apnea
Teresita Celestina S. Fuentes, M.D.
Office of Education and Research
East Avenue, Quezon City, Philippines 1100
Vol. No. , 2009


Since its introduction in 1981, continuous positive airway pressure (CPAP) has become the standard treatment for OSA. Several studies have focused on short-term clinical end points, but it is not known whether these early benefits of CPAP therapy are maintained over a longer period of time. The aim of this study was to determine the short- and long-term impact of CPAP treatment among moderate-to-severe OSA patients in improving sleepiness-related functional impairment. The study provided strong evidence that the use of CPAP resulted in short- and long-term improvements in sleepiness-related functional impairments.

The first time I saw the paper, I thought it was incomplete, but as I read the study I found that, although it was very short, everything that is important in a research paper was there. Statements were direct to the point. First the abstract of the study was given, then the methodology, next the results were laid out, and lastly the researcher drew conclusions about the outcome of the study.
ermilyn anne magaway


PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Mon Aug 24, 2009 5:10 pm

Mastering carbon management: Balancing trade-offs to optimize supply chain efficiencies

Karen Butner
Dietmar Geuder
Jeffrey Hittner


As the planet heats up, so do regulatory mandates to reduce greenhouse gas emissions worldwide. Much of the opportunity to address CO2 emissions rests on the supply chain, compelling companies to look for new approaches to managing carbon effectively — from sourcing and production, to distribution and product afterlife. The trade-offs in the supply chain are no longer just about cost, service and quality — but cost, service, quality and carbon. By incorporating carbon reduction into their overall SCM strategy, companies can help reduce their environmental emissions footprint, strengthen their brand image and develop competitive advantage. The volume of global trade has more than doubled in the last decade. This phenomenon has been facilitated by relatively cheap energy, with low attention given to the impact on climate change.
Going forward, firms should expect to be charged for their CO2 emissions. And most certainly, this charge will force a change in the way companies run their supply chains. Common practices of the last century – like long-distance airfreight, small batch size, just-in-time concepts and energy-intensive production in countries with low environmental standards – will likely go by the economic and political wayside. Reducing the supply chain’s carbon footprint will become an inescapable obligation.


This paper argues that, while taking into account traditional concerns about quality, service and cost, a comprehensive carbon-management strategy can build a base for sustaining growth – enabling companies to maintain competitive differentiation, strengthen their brand image and be better positioned to enter new markets. I think the part of the paper titled “How can IBM help?” says everything they want to address to people. IBM offers three things: (1) the Carbon Management – IBM Energy and Environment Framework, which helps organizations visualize the issues of the entire enterprise by creating a strategic platform for addressing the impact on the environment; (2) the Carbon Trade-Off Modeler, which allows for the development and analysis of alternative supply chain policies, options, and network configurations based on trade-offs between carbon emissions, cost, quality and service level; and (3) Component Business Modeling (CBM) tools, which allow organizations to identify opportunities for improvement and innovation by regrouping activities into modular and reusable components.



Rooter: A Methodology for the Typical Unification of Access Points and Redundancy

Jeremy Stribling, Daniel Aguayo and Maxwell Krohn


Many physicists would agree that, had it not been for congestion control, the evaluation of web browsers might never have occurred. In fact, few hackers worldwide would disagree with the essential unification of voice-over-IP and public private key pair. In order to solve this riddle, we confirm that SMPs can be made stochastic, cacheable, and interposable.


Here we motivated Rooter, an analysis of rasterization. We leave out a more thorough discussion due to resource constraints. Along these same lines, the characteristics of our heuristic, in relation to those of more little-known applications, are clearly more unfortunate. Next, our algorithm has set a precedent for Markov models, and we expect that theorists will harness Rooter for years to come. Clearly, our vision for the future of programming languages certainly includes our algorithm.


As I read this scientific paper, which was submitted to WMSCI 2005, I saw that it claims their experiment proved that they can do a whole lot to adjust a framework’s seek time, that von Neumann machines no longer affect performance, and that the IBM PC Junior of yesteryear actually exhibits better energy than today’s hardware. It is very hard to understand because of some technical words, and as a beginner I had a hard time with it. But since it was published and accepted, I think it matters.


IBM Introduces New Software to Help Clients More Effectively Manage Cross Platform Virtual Servers, Reduce Data Center Costs

Rick Bause
IBM Media Relations

IBM announced new systems software for managing virtualized servers, designed to help clients plan, build and maintain data centers while reducing costs. IBM is also helping clients protect their long-term investments in Power Systems™ by announcing an upgraded path to its next-generation servers that will include POWER7 microprocessors.
The new system, IBM Systems Director VMControl, gives clients a tool to manage heterogeneous virtual servers. It allows users to discover, display, monitor and locate virtual resources; create and manage virtual servers; and deploy and manage workloads with a common interface across IBM System z® mainframes, System x® x86-based servers, BladeCenter®, and Power Systems AIX®, Linux® and i platforms.
Management of virtualized servers is a key priority for businesses and can help make them more efficient, orchestrated, and effective. VMControl is part of the IBM Systems Director family of software for the management of IBM servers, storage and networking, and provides automatic discovery, as well as monitoring and updates for physical and virtual resources.


While reading, I can say that it really aims to help customers protect long-term systems investments. It also provides lifecycle management of virtual servers, with the ability to create, modify and delete virtualized resources, as well as move them to other locations. Systems Director can help businesses maintain the performance and availability of their servers and simplify operations in a dynamic infrastructure. Indeed, it addresses clients' need for virtualization management. The new system helps clients reduce total cost of ownership and provides them with the tools needed to both better manage and get more business value out of their heterogeneous virtual computing environments.

Back to top Go down
View user profile
mariechelle alcoriza

Posts : 36
Points : 50
Join date : 2009-06-20
Age : 28
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Mon Aug 31, 2009 10:33 pm

A Biologically Inspired Immune System for Computers
Jeffrey O. Kephart
High Integrity Computing Laboratory
IBM Thomas J. Watson Research Center
P.O. Box 704, Yorktown Heights, NY 10598


The study primarily focuses on computer viruses, which are in fact regarded as a serious problem in the industry nowadays. According to the study, two alarming trends are likely to make computer viruses a much greater threat. The first is that the speed at which new viruses are being written is high and accelerating; imagine new computer viruses being created and spread almost every minute! The second is the trend toward increasing interconnectivity and interoperability among computers, which would result in computer viruses spreading much faster.

Then, the IBM conducted a study and was able to create an immune system for computers. The primary features of the immune system are the following:
1. Recognition of known intruders.
2. Elimination/neutralization of intruders.
3. Ability to learn about previously unknown intruders.
o Determine that the intruder doesn't belong.
o Figure out how to recognize it.
o Remember how to recognize it.
4. Use of selective proliferation and self-replication for quick recognition and response.
Their system develops antibodies to viruses and worms the computer has encountered before: the system remembers them and responds more quickly if those viruses and worms attack again.

With respect to the computer immune system, the system does not need complete information about a virus in order to recognize it; instead, a virus is detected via an exact or fuzzy match to a relatively short sequence of bytes occurring in the virus (termed the signature).

How do they eliminate the intruders? If the computer immune system were to find an exact or fuzzy match to a signature for a known virus, it could take the analogous step of erasing or otherwise inactivating the executable file containing the virus.
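The exact-or-fuzzy signature match described above can be sketched in a few lines. This is a toy illustration only; the function name and the allowed-mismatch scheme are my own, not IBM's actual scanner:

```python
def fuzzy_find(data: bytes, signature: bytes, max_mismatches: int = 0) -> int:
    """Return the offset of the first exact-or-fuzzy occurrence of
    `signature` in `data`, tolerating up to `max_mismatches` differing
    bytes; return -1 if no window matches."""
    n, m = len(data), len(signature)
    for start in range(n - m + 1):
        mismatches = 0
        for a, b in zip(data[start:start + m], signature):
            if a != b:
                mismatches += 1
                if mismatches > max_mismatches:
                    break
        else:
            # inner loop finished without exceeding the budget: a match
            return start
    return -1
```

With `max_mismatches=0` this is an exact substring search; a small positive budget lets the scanner still catch slightly mutated copies of a known virus.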

Their system also has the ability to learn about previously unknown intruders. The process by which the proposed computer immune system establishes whether new software contains a virus has several stages. Integrity monitors, which use checksums to check for any changes to programs and data files, have a notion of ``self'' that is as restrictive as that of the vertebrate immune system: any differences between the original and current versions of any file are flagged, as are any new programs. Then mechanisms that employ the complementary strategy of ``know thine enemy'' are brought into play. Among these are activity monitors, which have a sense of what dynamic behaviors are typical of viruses, and various heuristics, which examine the static nature of any modifications that have occurred to see if they have a viral flavor.
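The checksum-based integrity monitor described above can be sketched as follows. This is a minimal illustration assuming a saved baseline of path-to-digest mappings; the function names are invented here:

```python
import hashlib
import os

def checksum(path: str) -> str:
    # Hash a file's contents; any change to the file changes this digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def flag_changes(baseline: dict, root: str) -> list:
    """Compare files currently under `root` against a saved baseline of
    path -> digest; return paths that are new or modified (not 'self')."""
    flagged = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if baseline.get(path) != checksum(path):
                flagged.append(path)
    return flagged
```

Anything flagged here is only an anomaly, not yet a confirmed virus; it is what the "know thine enemy" heuristics would then examine.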

If one of the virus-detection heuristics is triggered, the immune system runs the scanner to determine whether the anomaly can be attributed to a known virus. If so, the virus is located and removed in the usual way. If the anomaly can not be attributed to a known virus, either the generic virus-detection heuristics yielded a false alarm, or a previously unknown virus is at large in the system.
At this point, the computer immune system tries to lure any virus that might be present in the system to infect a diverse suite of ``decoy'' programs. A decoy program's sole purpose in life is to become infected. The algorithms extract from a set of infected decoys information on the attachment pattern of the virus, along with byte sequences that remain constant across all of the captured samples of the virus. Next, the signature extractor must select a virus signature from among the byte sequences produced by the attachment derivation step. The signature must be well-chosen, such that it avoids both false negatives and false positives. In other words, the signature must be found in each instance of the virus, and it must be very unlikely to be found in uninfected programs.
With regard to the use of selective proliferation and self-replication for quick recognition of viruses and worms, their system also has the ability that, when a computer discovers it is infected, it can send a signal to neighboring machines. The signal conveys to the recipient the fact that the transmitter was infected, plus any signature or repair information that might be of use in detecting and eradicating the virus. If the recipient finds that it is infected, it sends the signal to its neighbors, and so on. If the recipient is not infected, it does not pass along the signal, but at least it has received the database updates -- effectively immunizing it against that virus.
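The neighbor-signaling rule in that paragraph is essentially a graph traversal that stops at clean machines. A toy sketch, with an invented adjacency-map representation of the network:

```python
from collections import deque

def propagate_signal(neighbors: dict, infected: set, origin: str) -> set:
    """Spread a signature-update signal from an infected machine.
    Infected recipients forward it to their own neighbors; clean
    recipients absorb the update (are immunized) but do not forward.
    Returns the set of machines that received the update."""
    updated = {origin}
    queue = deque([origin])
    while queue:
        machine = queue.popleft()
        for peer in neighbors.get(machine, ()):
            if peer in updated:
                continue
            updated.add(peer)
            if peer in infected:  # only infected peers pass the signal on
                queue.append(peer)
    return updated
```

Because forwarding stops at uninfected machines, the signal naturally traces (and outruns) the infection's own spread rather than flooding the whole network.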


The research is not biased. It is interesting that IBM developed an immune system similar to the human immune system. The research is very informative in the sense that it shows how to avoid and even fight computer viruses by providing an immune system for computers.

Charity Begins at… your Mail Program

Peter G. Capek, Barry Leiba, Mark N. Wegman
IBM Thomas J. Watson Research Center,
Hawthorne, NY 10532
{capek, barryleiba, wegman}@us.ibm.com

Scott E. Fahlman
Carnegie Mellon University
Computer Science Department
Pittsburgh, PA 15213


There are many methods introduced in the industry to minimize spam or junk mail. One of them asks the sender of an e-mail to pay the recipient just to prove that the sender is not a spammer. This study introduces another technique, “charity seals”: the money spent is donated to charity, so most legitimate users would not mind doing it, since the money they spend goes to a good cause.

The primary interest of the study is e-mail between users who are not familiar with, or do not even know, each other. Nowadays, information about the sender is usually unconfirmed or not verifiable at all, and this results in the spam problem.

The study also reviews a number of other approaches to the spam problem and compares them to the “charity seals”. One is the one-time-use or “passworded” e-mail address idea. Many authors also promote “sender pays the recipient” schemes involving an exchange of money: if the sender turns out to be a spammer, the recipient keeps the money; otherwise, the recipient returns it to the sender. The point is simply to determine whether or not the sender is a spammer. Another approach uses a CAPTCHA (“Completely Automated Public Turing test for telling Computers and Humans Apart”), whose premise is that any human can answer the CAPTCHA easily while it would be difficult or impossible for a computer.

The paper also notes that Fahlman and Wegman have proposed another approach, “sender pays charity”, from which the idea of the “charity seals” is taken. The idea is similar to the “Christmas seals” that have been used in the United States for quite some time, since Christmas is the one time of year when people send large amounts of conventional mail. These seals are distributed by a charity (using paper mail) with a solicitation for a contribution, and a sender only uses the seals after making a contribution to the charity that issued them.

The idea of the “charity seals” is the same as the one stated above, combining the exchange of money with an electronic version of the seals. The difference is that the scheme is tightened: the seals are neither reusable nor forgeable.

How is it done? First, the sender chooses the specific charity to which he will donate the money. An agency collects donations on behalf of the charities. The agency operates an Internet service which supplies to the donor a custom-created seal which the sender can include in his e-mail. The seal is essentially a document containing at least the recipient’s identity, the amount of money donated, a unique number (perhaps a time stamp), and the sender’s identity. It is digitally signed by the agency, and is proof that the sender has made a qualifying donation to a participating charity. Effectively, the agency keeps an account for each contributor and debits it whenever a seal is issued.
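The seal described above (recipient, amount, unique number, sender, agency signature) can be sketched as a signed record. This is a toy model: the paper implies a public-key signature by the agency, but for a self-contained illustration I use a shared-secret HMAC; the key and field names here are invented:

```python
import hashlib
import hmac
import json
import time

AGENCY_KEY = b"agency-secret"  # illustrative shared secret; a real agency
                               # would sign with a private key instead

def issue_seal(sender: str, recipient: str, cents: int) -> dict:
    """The agency's signed statement that `sender` donated `cents`
    on behalf of mail addressed to `recipient`."""
    body = {"sender": sender, "recipient": recipient,
            "cents": cents, "nonce": time.time_ns()}  # nonce: not reusable
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(AGENCY_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_seal(seal: dict) -> bool:
    """Recompute the signature over everything except `sig` itself;
    any tampering with the fields invalidates the seal."""
    body = {k: v for k, v in seal.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(AGENCY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(seal.get("sig", ""), expected)
```

The nonce makes each seal single-use, and the signature makes it unforgeable without the agency's key, matching the two properties the paper highlights.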

If the task is giving money, the system requires a connection with some banking system. Typically this would be done using a credit card. Senders probably do not want to give credit cards out to everyone they send mail to, and most recipients are not set up to take credit cards. Financial institutions do, however, have the idea of escrow accounts. The notion of escrow is that one person puts money into an escrow that is trusted by both parties, and they agree to terms under which the money would be released to one or the other party. This makes it much easier to handle payments when one party could disappear.

One way to achieve the advantages of a central server, when the task is delivery of money, is to have the sender establish an escrow account for each recipient. If the recipient votes the mail as spam within some pre-established time limit, the money is paid (that is, the task is performed); otherwise it is returned to the sender. If the time limit expires before the recipient reads the mail, the system has the options of assuming that, if the sender was willing to risk the money, it is probably legitimate mail, or of not delivering the mail to the actual person until money is placed back in the escrow account.
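The escrow flow in that paragraph is a small state machine: money is held, then either paid to the recipient (spam vote within the window) or refunded (window expires). A toy sketch, with invented class and state names rather than the paper's design:

```python
import time

class Escrow:
    """Toy 'sender pays recipient' escrow: the sender deposits money;
    a spam vote inside the window pays the recipient, otherwise the
    deposit is refunded when the window expires."""

    def __init__(self, sender: str, recipient: str, cents: int,
                 window_s: float):
        self.sender, self.recipient = sender, recipient
        self.cents = cents
        self.deadline = time.monotonic() + window_s
        self.state = "held"

    def vote_spam(self) -> str:
        # Recipient flags the mail as spam before the deadline: payout.
        if self.state == "held" and time.monotonic() <= self.deadline:
            self.state = "paid_to_recipient"
        return self.state

    def expire(self) -> str:
        # Window passed with no spam vote: money goes back to the sender.
        if self.state == "held" and time.monotonic() > self.deadline:
            self.state = "refunded_to_sender"
        return self.state
```

Because only one transition out of "held" can ever fire, the money can never be both paid and refunded, which is the property an escrow trusted by both parties must guarantee.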

There are many approaches to addressing the problem of spam, and one of them is “charity seals”. This would keep the cost of mail low for legitimate users while making it expensive for spammers.


It is true that spam really exists and causes problems; many people have been victims of spammers. I think people's awareness of this problem is the first thing that should be addressed. As for the study, it is a good idea that they introduced this kind of approach, since giving money to charity is a good cause even if you spend a little money. They have introduced the idea, but the implementation details of the proposed methodology are still missing.
mariechelle alcoriza

Posts : 36
Points : 50
Join date : 2009-06-20
Age : 28
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Mon Aug 31, 2009 10:33 pm

part 2

Why Hackers Do What They Do: Understanding Motivation and
Effort in Free/Open Source Software Projects1

Karim R. Lakhani* and Robert G Wolf **
*MIT Sloan School of Management | The Boston Consulting Group
**The Boston Consulting Group <wolf.bob@bcg.com>
September 2003

The research paper states the outcomes of a study on the motivations behind individuals' continuing contributions to Free/Open Source Software. In other words, what factors drive developers to give so much of their time and effort to developing Free/Open Source Software?

What are the motivations of the F/OSS developers?

In the paper, they have stated and reviewed the two types of motivations. These are:

Intrinsic Motivation
As the word suggests, intrinsic motivation is found deep within a person. The paper states that when a person is intrinsically motivated, he or she is moved to act for the fun or challenge entailed rather than because of external prods, pressures, or rewards. According to Lindenberg (2001), intrinsic motivation is separated into two distinct components: 1. enjoyment-based intrinsic motivation and 2. obligation/community-based intrinsic motivation.

According to Deci and Ryan (1985), the idea behind intrinsic motivation is having fun or enjoying oneself when taking part in an activity. Csikszentmihalyi (1975) proposed a state of “flow”, in which enjoyment is maximized, characterized by intense and focused concentration; a merging of action and awareness; confidence in one’s ability; and the enjoyment of the activity itself regardless of the outcome.
On the other hand, regarding obligation/community-based intrinsic motivation, Lindenberg (2001) states that individuals may be socialized into acting appropriately in a manner consistent with the norms of a group.

Extrinsic Motivation
The idea behind extrinsic motivation is gaining rewards (whether direct or indirect) for doing a task or activity. In other words, the developer is paid, or is most probably given incentives, for doing the activity.

How was the study done?
The researchers conducted a web survey. The sample consisted of individuals listed as official developers on F/OSS projects hosted on SourceForge.net, the F/OSS community web site. The researchers sent personalized e-mails inviting each individual to participate in the survey, and assigned each a random personal identification number for accessing it. The first part of the survey was conducted from October 10-31, 2001, generating 526 responses for a response rate of 34.3%. The second survey was conducted on April 28, 2002 and generated 173 responses out of the 573 mails sent, a response rate of 30.0%.

The results:
According to the study, 87% of all respondents received no direct payments, while 55% contributed code during their work time. Those who received direct payments, combined with those whose supervisors knew of their work on the project, made up approximately 40% of the sample.

On the number of hours per week spent on a project, they found that respondents spent an average of 14.1 hours on all their F/OSS projects and 7.5 hours on the focal project.

On personal creativity and flow, the respondents reported a very high sense of personal creativity in the focal projects. More than 61% of the respondents said that their participation in the focal F/OSS project was their most creative experience.

Motivations to contribute
With respect to the survey results, the top single reason to contribute to projects is enjoyment-related intrinsic motivation: “Project code is intellectually stimulating to write”, cited by 44.9% of all respondents. Improving programming skills, an extrinsic motivation related to human capital improvement, was a close second, with 41.8% of participants saying it was an important motivator. Approximately 20% of the sample indicated that working with the project team was also a motivation for their contribution.
Paid contributors are strongly motivated by work-related user need (56%) and value professional status (22.8%) more than volunteers. On the other hand, volunteers are more likely to participate because they are trying to improve their skills (45.8%) or need the software for non-work purposes (37%).

To conclude, the study shows what drives developers to contribute to F/OSS projects, and what is good here is that, whether the motivation is intrinsic or extrinsic, many people contribute to the fast growth and success of the F/OSS community.

The research paper really interests me because it shows the motivations that lead developers to contribute to the continuing success of the F/OSS community. The research is very informative and complete, in the sense that it describes who the respondents were and the scope of the research, as well as the results and conclusions.
John Deo Luengo

Posts : 20
Points : 22
Join date : 2009-06-20
Age : 28
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Thu Sep 03, 2009 1:22 pm

Cog-Learn: An e-Learning Pattern Language for Web-based Learning Design
Authors: Junia Coutinho Anacleto, Americo Talarico Neto, and Vania Paula de Almeida Neris, Federal University of Sao Carlos, Brazil
Date: August 4, 2009


Designing Web-based content for e-learning is a difficult task for novice teachers who lack experience in interaction and learning design for the electronic environment. The results are poorly designed courses and learning contents—for instance, text documents with too much information, which hinder the students' learning. This research, supported by TIDIA-Ae project from FAPESP (process 03/08276-3), aims at designing learning material for Web-based e-learning and considers the different characteristics and knowledge of the multidisciplinary group that interact in such a project.

The authors synthesize the cognitive science's proposals, expressed here as a set of cognitive strategies adopted by Liebman, some of them from Ausubel, and some concepts used during interaction projects on Web systems such as universal design, participative design, and accessibility. They have documented those practices in patterns to support the design of the learning material. Considering such patterns, they propose to generate a common vocabulary among the participants of the multidisciplinary group that are responsible for designing the learning contents for e-learning (such as teachers, authors, educators, interface designers, software engineers, and Web designers), separating common qualities of existent designs, identifying successful solutions, and presenting the relevance of such solutions to help teachers better organize the content and thus benefit the students who are going to use it.

Here "teacher" is the professional responsible for designing the e-learning material, while "student" means the user that will interact with the developed Web interface published as learning content. This article is divided as follows. The first section briefly presents the cognitive strategies theory and the group that they used in the case studies. Second, they present the patterns and pattern language concepts. Third, they show the methodology used to conduct the case studies, including the framework that guided the usability evaluations. Fourth, they show the results from the case studies. Fifth, they present the e-learning pattern language identified, its details, and some potentiality and restrictions. Finally, we introduce a pattern-based tool to support the design of learning material, and end with some conclusions.


This research actually has a nice topic, since most schools and universities today have what we call e-learning or online learning, wherein teaching materials are uploaded online. This research would really help those teachers, professors and instructors not too familiar with this new form of teaching. With regard to the style used in this research, it is in APA format. The abstract is one paragraph which clearly explains the thrust of the research. In this research, the authors use case studies to obtain results. They also present Cognitor, a computer-based tool (more specifically, a pattern-based editor that incorporates the Cog-Learn pattern language) to support teachers in their task of designing learning material that promotes active learning, reducing knowledge-acquisition complexity.


Cloud Computing
Greg Boss, Padma Malladi, Dennis Quan, Linda Legregni,
Harold Hall,
Management Contact: Dennis Quan
Organization: High Performance On Demand Solutions (HiPODS)
8 October 2007


Innovation is necessary to ride the inevitable tide of change. Indeed, the success of the transformation of IBM to an On Demand Business depends on driving the right balance of productivity, collaboration, and innovation to achieve sustained, organic top line growth and bottom line profitability. Enterprises strive to reduce computing costs. Many start by consolidating their IT operations and later introducing virtualization technologies. Cloud computing takes these steps to a new level and allows an organization to further reduce costs through improved utilization, reduced administration and infrastructure costs, and faster deployment cycles. The cloud is a next generation platform that provides dynamic resource pools, virtualization, and high availability.

Cloud computing describes both a platform and a type of application. A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Cloud applications are applications that are extended to be accessible through the Internet. These cloud applications use large data centers and powerful servers that host Web applications and Web services.

Cloud computing infrastructure accelerates and fosters the adoption of innovations. Enterprises are increasingly making innovation their highest priority. They realize they need to seek new ideas and unlock new sources of value. Driven by the pressure to cut costs and grow simultaneously, they realize that it’s not possible to succeed simply by doing the same things better. They know they have to do new things that produce better results. Cloud computing enables innovation. It alleviates the need of innovators to find resources to develop, test, and make their innovations available to the user community. Innovators are free to focus on the innovation rather than the logistics of finding and managing resources that enable the innovation. Cloud computing helps leverage innovation as early as possible to deliver business value to IBM and its customers.

Fostering innovation requires unprecedented flexibility and responsiveness. The enterprise should provide an ecosystem where innovators are not hindered by excessive processes, rules, and resource constraints. In this context, a cloud computing service is a necessity. It comprises an automated framework that can deliver standardized services quickly and cheaply. Cloud computing infrastructure allows enterprises to achieve more efficient use of their IT hardware and software investments. Cloud computing increases profitability by improving resource utilization. Pooling resources into large clouds drives down costs and increases utilization by delivering resources only for as long as those resources are needed. Cloud computing allows individuals, teams, and organizations to streamline procurement processes and eliminate the need to duplicate certain computer administrative skills related to setup, configuration, and support.

This paper introduces the value of implementing cloud computing. The paper defines clouds, explains the business benefits of cloud computing, and outlines cloud architecture and its major components. Readers will discover how a business can use cloud computing to foster innovation and reduce IT costs.


The research paper for IBM clearly explains the concept of cloud computing, a computing platform for the next generation of the Internet. It defines clouds, explains the business benefits of cloud computing, outlines cloud architecture and its major components, and describes IBM’s own implementation of cloud computing. Readers will discover how a business can use cloud computing to foster innovation and reduce IT costs.


Comparing Java and .NET Security: Lessons Learned and Missed
Nathanael Paul and David Evans
University of Virginia
Department of Computer Science

Java and .NET are both platforms for executing untrusted programs with security restrictions. Although they share similar goals and their designs are similar in most respects, there appear to be significant differences in the likelihood of security vulnerabilities in the two platforms.
By contrast, no security vulnerabilities in the .NET virtual machine platform have been reported to date. The most widely publicized security issue in .NET was W32.Donut, a virus that took control of the executable before the .NET runtime had control [46]. Since the vulnerability occurs before the .NET runtime takes control, we consider this a problem with the way the operating system transfers control to .NET, not with the .NET platform. Eight other security issues that have been identified in .NET are listed in Microsoft’s Knowledge Base [12] and the CVE database [27], but none of them are platform security vulnerabilities by the standard we use in this paper. Appendix A explains these issues and why we do not count them.

Java and .NET have similar security goals and mechanisms. .NET’s design benefited from past experience with Java. Examples of this cleaner design include the MSIL instruction set, code access security evidences, and the policy configuration. .NET has been able to shield the developer from some of the underlying complexity through its new architecture. Where Java evolved from an initial platform with limited security capabilities, .NET incorporated more security capability into its original design. With age and new features, much of the legacy code of Java still remains for backwards compatibility, including the possibility of a null SecurityManager and the absolute trust of classes on the bootclasspath. Hence, in several areas .NET has security advantages over Java because of its simpler and cleaner design.

Most of the lessons to learn from Java’s vulnerabilities echo Saltzer and Schroeder’s classic principles, especially economy of mechanism, least privilege and fail-safe defaults. Of course, Java’s designers were aware of these principles, even though in hindsight it seems clear there were occasions where they could (and should) have been followed more closely than they were. Some areas of design present conflicts between security and other design goals, including fail-safe defaults vs. usability, and least privilege vs. usability and complexity. For example, the initial stack walk introduced in Java has evolved to a more complex stack walk in both architectures to enable developers to limit privileges. In addition, both platforms’ default policies could be more restrictive to improve security, but restrictive policies hinder the execution of programs. .NET’s use of multi-level policies with multiple principals provides another example of the principles of least privilege and fail-safe defaults in contention with usability and complexity. Several of the specific complexities that proved to be problematic in Java have been avoided in the .NET design, although .NET introduced new complexities of its own. Despite .NET’s design certainly not being perfect, it does provide encouraging evidence that system designers can learn from past security vulnerabilities and develop more secure systems. We have no doubts, however, that system designers will continue to relearn these principles for many years to come.
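The stack-walk idea that the excerpt keeps returning to (a request is allowed only if every caller on the stack holds the permission, so the least-trusted caller caps the result) can be modeled abstractly. This is a toy model with an invented grant table, not the actual Java or .NET machinery:

```python
# Hypothetical grant table: which permissions each kind of code holds.
GRANTS = {
    "applet_code":  set(),                       # untrusted: nothing
    "library_code": {"read_file"},
    "system_code":  {"read_file", "write_file"},
}

def check_permission(call_stack: list, permission: str) -> bool:
    """Walk the call stack from the most recent frame outward; every
    frame must hold `permission`, so untrusted code cannot gain
    privileges by calling through trusted code (least privilege)."""
    for frame in call_stack:
        if permission not in GRANTS.get(frame, set()):
            raise PermissionError(f"{frame} lacks {permission}")
    return True
```

The real mechanisms add features this sketch omits (privilege assertion, evidence-based grants), but the core contention the paper describes is visible even here: a stricter walk is safer yet makes legitimate layered calls harder to author.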


This research paper is a comparative study of two platforms, Sun’s Java and Microsoft’s .NET. It clearly explains the pros and cons of each platform. According to the research, many systems execute untrusted programs in virtual machines (VMs) to limit their access to system resources. Sun introduced the Java VM in 1995, primarily intended as a lightweight platform for executing untrusted code inside web pages. More recently, Microsoft developed the .NET platform with similar goals. Both platforms share many design and implementation properties, but there are key differences between Java and .NET that have an impact on their security. This paper examines how .NET’s design avoids vulnerabilities and limitations discovered in Java and discusses lessons learned (and missed) from experience with Java security.

Posts : 30
Points : 39
Join date : 2009-06-19
Age : 28
Location : davao city

PostSubject: Assignment   Fri Sep 04, 2009 6:49 pm

Testing Times for Trojans
Ian Whalley
IBM TJ Watson Research Center, PO Box 704, Yorktown Heights, NY 10598, USA


In the field of computing, Trojan horses have been around for even longer than computer viruses – but traditionally have been less of a cause for concern amongst the community of PC users. In recent years, however, they have been the focus of increased attention from anti-virus companies and heightened levels of user concern.
This paper aims to investigate the Trojan phenomenon; particular attention is paid to claims made in the field of NVM (non-viral malware) detection, and to those made by people who aim to test the vendors’ claims.
In addition, various attempts to define Trojan horses will be evaluated, and a new definition will be suggested.


The continuing interest in anti-Trojan testing seems doomed to continue, regardless of whether or not the average user is actually at any risk from Trojans.
The current standard of anti-Trojan testing can be improved to a certain extent by careful justification and documentation of samples and test-sets.
Even with such precautions, the contents of the test-sets will always be a matter for controversy. The subjectivity of all definitions of Trojan will inevitably lead to disputes concerning whether or not certain files are appropriate for inclusion in a test-set.


The Future of Viruses on the Internet
By David Chess


At the present time, computer viruses are a steady, low-level frustration. Every company knows that it must have anti-virus software, and that virus protection is part of the price of doing business, precisely in order to prevent those viruses. Viruses have only one purpose: to spread infections through systems, which may destroy those systems' functionality. But even for viruses that arrive from the Internet, in organizations that do centralized incident management and reporting and have anti-virus software well deployed, most incidents involve only one or two systems, and the virus is caught before it can spread farther. Those viruses are simply disinfected automatically by the anti-virus software.


When new viruses are discovered, anti-virus software is updated to deal with them on a cycle of weeks or even months; that happens all the time. In this sense, viruses are like any other business problem: when problems occur, updates follow to provide solutions. The Internet plays a comparatively small role in spreading these threats, and it also provides some detection and prevention so that the viruses will not spread throughout systems.

Evaluating the Research on Violent Video Games
Jonathan L. Freedman
Department of Psychology
University of Toronto


As human beings, we have difficulty accepting random or senseless occurrences. We want to understand why something has happened, and the strength of this desire seems to be proportional to the horror of the event. When a horrible crime occurs, we want to know why. If it was related to drugs or gangs or an armed robbery, I think we find those sufficient reasons. We do not hate the crime less, but at least we think we know why it occurred.
Most of the non-experimental work consists of relatively small-scale surveys. People are asked about their exposure to video games, to violent video games, and to various other media. They are also asked about their aggressive behavior, or occasionally others provide information on the respondents' aggressive behavior. Then the researchers conduct correlational analyses (or other similar analyses) to see if those who are exposed more to violent video games are more aggressive than those who are exposed less. Sometimes more detailed analyses are conducted to see if other factors mediate or reduce any relation that is found.
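To make the "correlational analysis" step concrete, here is a minimal sketch of computing a Pearson correlation between reported violent-game exposure and an aggression score. The numbers are invented for illustration and are not from any study Freedman reviews:

```python
import math

# Hypothetical survey responses: hours/week of violent games, aggression score.
exposure = [0, 2, 4, 6, 8, 10]
aggression = [1, 2, 2, 4, 5, 6]

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(exposure, aggression)
print(round(r, 3))  # a strong positive correlation for this toy data
```

Note that a high r here says nothing about causation, which is exactly the limitation Freedman raises against the survey work.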


Insufficient attention has been paid to choosing games that are as similar as possible except for the presence of violence; virtually no attention has been paid to eliminating or at least minimizing experimenter demand; and the measures of aggression are either remote from aggression or of questionable value. There is substantial, though far from overwhelming or definitive, evidence that people who like and play violent video games tend to be more aggressive than those who like and play them less.

Melgar John Gascal

Posts : 13
Points : 16
Join date : 2009-06-19
Age : 28

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Sat Oct 03, 2009 10:23 am

Technologically Enabled Crime: Shifting Paradigms for the Year 2000
By Sarah Gordon


This paper will consider the social and ethical factors involved in the transmission of computer viruses and other malicious software. In addition to the people, we will consider the part the systems and technology play in the spread of this sort of data. We will draw parallels with one of the more well known scientific paradigms, the medical one, and note the similarities with the problems we now face. We will describe the evolution of methods of virus distribution: virus exchange bulletin boards, virus exchange networks, distribution sites, robots/servers, and books. The paper will discuss viruses for sale and make some comparisons between distribution of computer viruses and the distribution methods of "hacking tools". Other issues examined in this paper include the characteristics of individuals involved in the distribution of these types of programs, and problems of legal redress, as well as possible solutions based on ethics and ethical theory.


The abstract of this research clearly explained the thrust of the research. It also stated the steps and methods that the researchers undertook in completing it. I found this research quite interesting since, as we all know, viruses are everywhere. As IT students, I think we ought to know the evolution of viruses and how they are distributed all over the world. Other than that, we also need to know how we can address and provide solutions to these virus problems.

Reference: http://www.research.ibm.com/antivirus/SciPapers/Gordon/Crime.html

Hoaxes & Hypes
By Sarah Gordon, IBM T.J.Watson Research Center, Richard Ford,
and Joe Wells, Wells Research


Virus hoaxes and virus hypes are new and growing problems in the corporate environment, where the spread of such rumors can cause as much disruption as actual virus outbreaks. We review a number of recent examples of hoax and hype, and show that hoaxes that become widespread have certain characteristics that promote their spread. Using these characteristics, it is possible to create a set of rules which will help to distinguish fabrication from fact. Similarly, virus hype, often generated by the anti-virus industry or well-meaning members of the media, portrays real but insignificant viruses as doomsday threats. We show how such hype is almost always wrong. Finally, we discuss corporate policies that have been proven to minimize the disruption of hoaxes and hype, and give corporate anti-virus administrators a wealth of information resources to which they can turn as new hoaxes and hype come to light.


Personally, I like the way the researchers wrote this paper because they used simple terms that I think everybody can easily understand. The whole content of the paper was clearly defined and summarized in the abstract. They also researched the common types of hypes and hoaxes and defined and explained each in a very clear manner. Comparing and contrasting their characteristics was the best way to let readers understand what these terms and types are.

At first, they identified and examined non-computer-related types of hypes and hoaxes, and then, in the later part, they gave an example of each type of computer virus hoax and hype, which for me is a good way to start a research paper. In this way, readers are given an idea of what these really are.

After reading the whole paper, all I can say is that the most powerful form of defense against hoaxes is to build up a set of trusted sources of information which have a good track record for accuracy. This approach alone, along with a good measure of skepticism, will protect us from the vast majority of all virus misinformation circulated.
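The paper's claim that hoax characteristics can be turned into "a set of rules which will help to distinguish fabrication from fact" can be sketched as a simple heuristic scorer. The rules below are my own illustrative examples, not the ones Gordon, Ford, and Wells actually derive:

```python
import re

# Hypothetical heuristic rules; each hit adds to a "hoax likelihood" score.
HOAX_RULES = [
    r"forward this to everyone",              # chain-letter forwarding request
    r"will (destroy|erase) your hard drive",  # exaggerated doomsday damage
    r"no known cure",                         # claim of impossible severity
    r"announced by (ibm|microsoft|aol)",      # vague appeal to authority
]

def hoax_score(message: str) -> int:
    """Count how many hoax-like traits appear in a message."""
    text = message.lower()
    return sum(1 for rule in HOAX_RULES if re.search(rule, text))

warning = ("URGENT! A new virus with no known cure will destroy your hard "
           "drive. Forward this to everyone you know!")
print(hoax_score(warning))  # several hoax traits present
print(hoax_score("Patch released; see the vendor advisory for details."))  # 0
```

A high score is only a prompt for skepticism; as the paper says, checking against trusted information sources remains the real defense.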

Reference: http://www.research.ibm.com/antivirus/SciPapers/Gordon/HH.html

The Emerging Economic Paradigm of Open Source
Bruce Perens
Senior Research Scientist, Open Source
Cyber Security Policy Research Institute, George Washington University.
Last edited: Wed Feb 16 06:22:06 PST 2005


Open Source developers have, perhaps without conscious intent, created a new and surprisingly successful economic paradigm for the production of software. Examining that paradigm can answer a number of important questions.

It's not immediately obvious how Open Source works economically. Probably the worst consequence of this lack of understanding is that many people don't understand how Open Source could be economically sustainable, and some may even feel that its potential negative effect upon the proprietary software industry is an overall economic detriment. Fortunately, if you look more deeply into the economic function of software in general, it's easy to establish that Open Source is both sustainable and of tremendous benefit to the overall economy.

Open Source can be explained entirely within the context of conventional open-market economics. Indeed, it turns out that it has much stronger ties to the phenomenon of capitalism than you may have appreciated.


Personally, I had a hard time understanding what the writer wanted to convey in his research. Although he used simple words, the whole content wasn't clearly defined in the abstract of the paper. On the other hand, I liked how the writer defined and explained the impact of Open Source on the economy. It is a fact that Open Source powers a majority of web servers today, a majority of email deliveries, and many other businesses, organizations, and personal pursuits. Thus, its economic impact must already be numbered in many tens of billions of dollars. Any improvement in technology that permits business to function more efficiently means the economy runs more efficiently. In this case, Open Source enables business to spend less on software and to have better quality and more control over its software. The money that is saved on software doesn't disappear; the people who save it spend it on things that are more important to them.

Reference: http://perens.com/Articles/Economic.html

ace sandoval

Posts : 18
Points : 30
Join date : 2009-06-23
Age : 28
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Sun Oct 04, 2009 7:39 pm

Research Paper No. 1549
Information Sharing in a Supply Chain
L. and Seungjin

Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize the chain-wide performance, and the realized return may be shared among the partners. A basic enabler for tight coordination is information sharing, which has been greatly facilitated by the advances in information technology. This paper describes the types of information shared: inventory, sales, demand forecast, order status, and production schedule. We discuss how and why this information is shared using industry examples and relating them to academic research. We also discuss three alternative system models of information sharing – the Information Transfer model, the Third Party Model, and the Information Hub Model.

This paper was all about information sharing. The abstract was brief and precise. The paper did not follow the standard format that I know; it has its own format to express and explain the models, types, and constraints of information sharing.
The paper is organized as follows: Section 1 is the introduction; Section 2 describes the types of information shared and the associated benefits; Section 3 discusses alternative system models to facilitate information sharing; and Section 4 addresses the challenges of information sharing.
Regarding presentation, the paper is not well arranged: the survey results are in the last part of the paper, while the references appear earlier. The paper uses many examples to illustrate each model of information sharing and each type of shared information.

Wait-free Programming for General Purpose Computations on Graphics Processors

This paper aims at bridging the gap between the lack of synchronization mechanisms in recent GPU architectures and the need for synchronization mechanisms in parallel applications. Based on the intrinsic features of recent GPU architectures, the researchers construct strong synchronization objects, like wait-free and t-resilient read-modify-write objects, for a general model of recent GPU architectures without strong hardware synchronization primitives like test-and-set and compare-and-swap. Accesses to the wait-free objects have time complexity O(N), where N is the number of processes. The fact that graphics processors (GPUs) are today's most powerful computational hardware for the dollar has motivated researchers to utilize the ubiquitous and powerful GPUs for general-purpose computing. Recent GPUs feature the single-program multiple-data (SPMD) multicore architecture instead of the single-instruction multiple-data (SIMD) one. However, unlike CPUs, GPUs devote their transistors mainly to data processing rather than data caching and flow control, and consequently most of the powerful GPUs with many cores do not support any synchronization mechanisms between their cores. This prevents GPUs from being deployed more widely for general-purpose computing.

This paper was more about algorithms; at first look it is really complicated, but it is well explained by the figures and formulas showing how the authors arrive at the desired results. I also noticed that they used constructs like if and if-else statements as well as for loops.
The result demonstrates that it is possible to construct wait-free synchronization mechanisms for graphics processors (GPUs) without the need for strong synchronization primitives in hardware, and that wait-free programming is therefore possible for GPUs. Most of the paper's content consists of algorithms.

Map-Reduce for Machine Learning on Multicore

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.

The paper focuses on developing a general and exact technique for parallel programming of a large class of machine learning algorithms for multicore processors. The abstract was brief and precise, and the paper follows the standard format. It also uses graphs, formulas, and statistical models that are easy to understand, and it presents good theoretical computational-complexity results.
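The "summation form" idea can be shown with ordinary linear regression: the sufficient statistics Σxᵢ² and Σxᵢyᵢ are sums over the data, so mappers can compute partial sums over data chunks and a reducer combines them. This sketch uses plain `map`/`reduce` calls rather than Google's MapReduce infrastructure, and the data are invented:

```python
from functools import reduce

# Invented data lying exactly on y = 3x, split into chunks ("shards").
chunks = [[(1, 3), (2, 6)], [(3, 9), (4, 12)], [(5, 15)]]

def mapper(chunk):
    """Partial sufficient statistics (sum of x^2, sum of x*y) for one chunk."""
    sxx = sum(x * x for x, _ in chunk)
    sxy = sum(x * y for x, y in chunk)
    return sxx, sxy

def reducer(a, b):
    """Combine partial statistics; the summation form makes this trivial."""
    return a[0] + b[0], a[1] + b[1]

sxx, sxy = reduce(reducer, map(mapper, chunks))
slope = sxy / sxx  # least-squares slope through the origin
print(slope)  # 3.0
```

Because each chunk is processed independently, the map step parallelizes across cores, which is what yields the near-linear speedup the paper reports.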

Posts : 21
Points : 30
Join date : 2009-06-23
Age : 28
Location : Panabo City

PostSubject: assign1   Tue Oct 06, 2009 9:42 am

Hamad I. Odhabi
Ray J. Paul
Robert D. Macredie

Centre for Applied Simulation Modelling (CASM)
Department of information Systems and Computing
Brunel University
Uxbridge, Middlesex UB8 3PH, UNITED KINGDOM

This paper investigates an attempt to combine several different tools to build a simulation environment that can be used to model complex systems. The tools used in this research are the Four Phase Method, a simulation world view derived from the three-phase approach and designed especially for Object-Oriented Programming; an iconic representation that represents the actual system components and logic using icons and arcs; Object-Oriented Programming; and the MODSIM simulation library. The authors use discrete-event simulation modeling since it offers people the chance to develop an understanding of their problem domain by building a simulation of the problem space in which they are interested. The paper also draws on three broad perspectives on how to approach the development of simulation models.
The first perspective focuses on using a graphical user interface (GUI) that allows the user to build the model on the screen, connect the components by arcs to represent the model logic, and run the simulation (Drury and Laughery 1994). The second perspective is underpinned by the belief that no simulation program is able to model all types of system behavior without making some simplifications or modifications (Joines 1994). Lastly, the third perspective, on which the paper fully focuses, concentrates on using a GUI that is able to automatically generate code, with the modeler making changes to the generated code to match the system's needs (Hlupic and Paul 1994). The researchers are concerned with two basic issues.
The first one concerns the modeling approach. Several programming approaches, often known as 'simulation world views', have been designed for discrete-event simulation modeling. The aim of any approach used should be to aid the production of a valid, working simulation at minimum cost or in the shortest time (Pidd 1992a).
The second issue concerns the cost of modifying the generated code, which depends on the programming methodology. There are specific methodologies, reflecting particular programming paradigms, which may support simplified comprehension of the model code, and therefore its maintenance. Object-Oriented Programming (OOP), for example, has become popular in simulation modeling (Kienbaum and Paul 1994b), with a claim of relative ease of maintenance being made for the approach.
This research was made to address both issues of modeling approach and programming methodology. The authors introduce a new simulation world view termed the Four Phase Method (FPM), and discuss its importance in the context of iconic representations and the automatic generation of code.
The aim of the research is to attempt to combine a new simulation world view, OOP, and iconic representation to construct a simulation environment for the development of discrete-event simulation models. The modeling environment should be able to model complex system behavior, provide the user with a simple iconic representation to ‘drive’ the model design, and generate understandable code.


This research paper was, as stated above, about simulation, and it creates a new simulation world view which the authors named the Four Phase Method (FPM). They aimed to combine this new world view, OOP, and iconic representation to construct a simulation environment for the development of discrete-event simulation models, one in which the modeling environment should be able to model complex system behavior, provide the user with a simple iconic representation to 'drive' the model design, and generate understandable code.
The research paper was very organized from the introduction to the conclusion. The variables are carefully explained and evaluated, perhaps because the writers are, shall we say, experts in the information technology industry. In the model layout and problem description, they state that the problem being investigated is not of principal importance to the work, since the work is mainly concerned with the modeling process in its own right; it could be an investigation of production quantity or a calculation of the average time required to produce a product. In my view, though, the way they identify their problem is somewhat unclear: there are terms that an ordinary reader would find confusing, since not everyone understands technical terms.
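Since the paper's subject is discrete-event simulation, a minimal event-loop sketch may help readers unfamiliar with the idea. This is a generic single-server queue driven by a time-ordered event list, not the Four Phase Method itself (whose phases the paper defines and this review does not reproduce):

```python
import heapq

# Jobs arrive at fixed times; the single server takes 2 time units per job.
arrivals = [0, 1, 2, 3, 4]
SERVICE_TIME = 2

# The event list is a priority queue ordered by simulated time.
events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
heapq.heapify(events)

server_free_at = 0
completions = []
while events:
    clock, kind, job = heapq.heappop(events)  # advance to next event
    if kind == "arrival":
        # Service starts when both the job and the server are ready.
        start = max(clock, server_free_at)
        server_free_at = start + SERVICE_TIME
        heapq.heappush(events, (server_free_at, "departure", job))
    else:
        completions.append(clock)

print(completions)  # [2, 4, 6, 8, 10]
```

World views such as the three-phase approach (and, per the paper, the FPM) are essentially disciplined ways of organizing the body of this loop as models grow complex.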


Javier Faulin

Department of Statistics and OR
Campus Arrosadia
Public University of Navarre
Pamplona, Navarre 31006, SPAIN

Angel A. Juan
Carles Serrat

Department of Applied Mathematics I
Av. Doctor Marañon, 44-50
Technical University of Catalonia
Barcelona, 08028, SPAIN

Vicente Bargueño

Department of Applied Mathematics I
ETS Ingenieros Industriales
Universidad Nacional de Educacion a Distancia
Madrid, 28080, SPAIN

This paper presents the researchers’ basic ideas behind a simulation-based method, called SAEDES, which can be very useful when determining the availability of a wide range of complex systems. The method is implemented in C/C++ using two different algorithms, SAEDES_A1 (component-oriented) and SAEDES_A2 (system-oriented). Two case studies are introduced and analyzed using both algorithms, which allows the authors to compare the associated results. The ultimate objective of the method is to estimate a complex system's availability from information assumed to be known: the system's logical structure and the failure-time and repair-time distributions of each component. SAEDES_A1 uses MS and can be considered component-oriented, in the sense that it is based on generating each component's history; SAEDES_A2 uses DES (discrete-event simulation) and can be considered system-oriented, in the sense that it is based on generating the system's history.
The method presented in this paper, SAEDES, has been designed to deal with any kind of logical or physical system that meets some general criteria. The following assumptions are made:
1. Two-state systems: at any given time, the system will be either operational (working properly) or not
2. Coherent systems: the analyzed system is assumed to be coherent, in other words: if every component is operative the system will be operative, if no component is operative the system will not be operative, and a positive status change in a component (that is, from inoperative to operative) cannot cause a negative status change in the system (that is, the system will not change its status from operative to no operative)
3. Minimal paths decomposition: the system logical structure is known and it can be expressed in the form of minimal paths
4. Component failure-times and repair-times distributions: for each component, its associated failure-times and repair-times distributions are perfectly known
5. Maintainability policy: the system is under a continuous inspection policy, that is, any failure will be detected as soon as it appears
6. Perfect reparations or substitutions: when a component fails, it is repaired or substituted by a new one; in any case, the result is as if a new component has been placed
7. Failure-times and repair-times independence: the failure-times associated to one specific component are independent from the failure-times associated to any other component; the same holds true for repair times.

Assumptions (1) to (4) guarantee that there is enough information to study the system reliability. Assumption (3) often requires a detailed analysis of logical relationships among components. In this sense, simulation algorithms have been proposed to find out the minimal path decomposition of a complex system (Lin and Donaghey 1993). In the assumption (4) context, statistical methods such as accelerated life tests (Meeker and Escobar 1998) and data fitting techniques (Leemis 2003) are usually required. Assumptions (5) and (6) are not restrictive in the sense that they could be relaxed, if necessary, by adapting the algorithms of the method.
Finally, assumption (7) is the most restrictive one and it may require considering some abstraction levels in the system decomposition.

SAEDES method and algorithms make use of several mathematical concepts and techniques. Specifically, the method is based on:
• System availability theory: system reliability and availability concepts, including minimal paths theory (Barlow and Proschan 1996, Hoyland and Rausand 1994, Kovalenko et al. 1997, Pham 2003)
• Simulation techniques: data fitting, pseudorandom number generation, event treatment, and variance reduction methods (Banks 1998, Chung 2004, Law and Kelton 2000, L’Ecuyer 2002, Wang and Pham 1997)
• Probability and statistical concepts: probability theory, descriptive statistics and inference techniques (Ross 1996).

SAEDES can be very helpful for system managers and engineers in determining and improving complex systems availability. SAEDES is able to provide useful information about complex systems availability and can be applied in most situations where analytical methods are not well suited. Two different and alternative algorithms have been developed to perform SAEDES core functions. Both algorithms have been implemented as computer programs and used separately to analyze different complex systems. Different case studies have been conducted, showing that results from both algorithms are convergent, which contributes to validate the method and to add credibility to it.
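The component-oriented idea behind SAEDES_A1, generating a component's failure/repair history and measuring the fraction of time it is up, can be sketched generically. This is a textbook availability Monte Carlo consistent with assumptions (1)-(7), not the authors' C/C++ code, and the exponential distributions and parameter values are my own assumptions:

```python
import random

def estimate_availability(mttf: float, mttr: float,
                          cycles: int, seed: int = 42) -> float:
    """Simulate alternating up/down periods for one two-state component
    (perfect repair, independent exponential failure and repair times)
    and return the fraction of time the component is operational."""
    rng = random.Random(seed)
    up_time = down_time = 0.0
    for _ in range(cycles):
        up_time += rng.expovariate(1.0 / mttf)    # time to next failure
        down_time += rng.expovariate(1.0 / mttr)  # time to complete repair
    return up_time / (up_time + down_time)

a = estimate_availability(mttf=100.0, mttr=10.0, cycles=20000)
print(round(a, 3))  # close to the steady-state value MTTF/(MTTF+MTTR) ~ 0.909
```

For a single component this has a known analytical answer, which is useful as a sanity check; the point of a method like SAEDES is that the same simulation idea still works when components are combined through minimal paths and no closed form exists.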


Hmmmn.. This research paper (for me) is actually good. It's just that there are parts of the paper that are unclear, such as acronyms whose meanings are never spelled out. If I were an ordinary person with no knowledge of the technical terms used in the computer industry, I wouldn't be able to understand it. I think the authors should give the meaning of every acronym they use. Well, it's just my own opinion.

Environmental Tobacco Smoke and Tobacco Related Mortality in a Prospective Study of Californians, 1960-98

James E. Enstrom
Geoffrey C. Kabat

The paper is about tobacco; its main objective is to measure the relation between environmental tobacco smoke, as estimated by smoking in spouses, and long-term mortality from tobacco-related disease. It covers a 39-year prospective cohort study. The main outcome measures are the relative risks and 95% confidence intervals for deaths from coronary heart disease, lung cancer, and chronic obstructive pulmonary disease related to smoking in spouses and active cigarette smoking. For participants followed from 1960 until 1998, the age-adjusted relative risk (95% confidence interval) for never smokers married to ever smokers, compared with never smokers married to never smokers, was 0.94 (0.85 to 1.05) for coronary heart disease, 0.75 (0.42 to 1.35) for lung cancer, and 1.27 (0.78 to 2.08) for chronic obstructive pulmonary disease among 9619 men, and 1.01 (0.94 to 1.08), 0.99 (0.72 to 1.37), and 1.13 (0.80 to 1.58), respectively, among 25,942 women. No significant associations were found for current or former exposure to environmental tobacco smoke, before or after adjusting for seven confounders and before or after excluding participants with pre-existing disease. No significant associations were found during the shorter follow-up periods of 1960-65, 1966-72, 1973-85, and 1973-98. As a conclusion, the results do not support a causal relation between environmental tobacco smoke and tobacco-related mortality; the association between environmental tobacco smoke and coronary heart disease and lung cancer may be considerably weaker than generally believed.
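For readers unfamiliar with how a relative risk and its 95% confidence interval are read, here is a small worked sketch using an invented 2×2 cohort table (the counts are hypothetical and are not taken from the Enstrom and Kabat data):

```python
import math

# Hypothetical cohort counts: deaths / total participants per exposure group.
exposed_deaths, exposed_total = 50, 1000      # never smokers married to smokers
unexposed_deaths, unexposed_total = 40, 1000  # married to never smokers

risk_exposed = exposed_deaths / exposed_total
risk_unexposed = unexposed_deaths / unexposed_total
rr = risk_exposed / risk_unexposed  # relative risk

# 95% CI via the standard large-sample formula on the log scale.
se_log_rr = math.sqrt(1 / exposed_deaths - 1 / exposed_total
                      + 1 / unexposed_deaths - 1 / unexposed_total)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# The interval spans 1.0, so this hypothetical excess risk is not significant;
# intervals like 0.94 (0.85 to 1.05) in the study are read the same way.
```

This is why the study reports "no significant associations": every quoted confidence interval includes 1.0.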


Actually, I was a little bit confused here, not because of the research paper but because of the questions that popped up in my mind. I was wondering whether the paper I summarized could be categorized as scientific research. When I searched for "sample scientific research", it appeared, so I guess it is one of the "scientific researches". Going back to the paper: the figures and the way it is presented are good enough for any individual to understand.
Jonel Amora

Posts : 53
Points : 61
Join date : 2009-06-23
Age : 26
Location : Davao City

PostSubject: Re: Assignment 1 (Due: before July 14, 2009, 13:00hrs)   Sat Oct 10, 2009 1:17 pm

Growth and Mineralization Characteristics of
Toluene and Diesel-Degrading Bacteria
From Williams Refinery Groundwater


Before bioremediation can be used to clean up a contaminated site, one must understand the characteristics of the potential bioremediating bacteria. It is important that the chosen bacteria have characteristics suitable for the contaminated environment. Since these diesel-degrading bacteria have evolved to grow optimally at cold temperatures, they may be useful in the bioremediation of hydrocarbon-contaminated groundwater with similar environmental characteristics. Given that many contaminated sites are underground, where no oxygen gas is present, the nitrate-reducing, diesel-degrading bacteria could be used to remediate these sites. While the anaerobic bioremediation method is cheaper than the aerobic method, the aerobic method has the advantage of being able to clean up the site faster. If these bacteria were used in bioremediation, scientists would need to determine whether money or time is more important.


This paper is really useful, especially since almost all households here in the Philippines use storage tanks containing petroleum products. These tanks may leak, and contaminants frequently make their way into underground water supplies, or aquifers. Though the research was not based on our country, it can still be a good reference for someone who would like to do similar research in the Philippines, because temperature here is really a big factor. The research paper has a good discussion of bioremediation; very informative.

Computerized anthropometric analysis of the Man of the Turin Shroud
Giulio Fanti, Emanuela Marinelli, Alessandro Cagnazzo,


An anthropometric analysis of the Man of the Shroud was carried out, making comparisons with bibliographic data and experimental research. The images were acquired and elaborated to point out the outlines of the two imprints and to carry out measurements corrected for the systematic effects found, for instance those due to the cloth-body wrapping effect. The height of the Man of the Shroud was obtained both by direct measurement with digital techniques and by comparing the most significant anthropometric indices with bibliographic data, imposing the same kinematic conditions (angles of the knees and feet) in the frontal and dorsal imprints. From the comparison of the anthropometric indices characteristic of different human races with those of the Man of the Shroud, it was possible to point out that the Semitic race is the closest to the characteristics obtained. The tibio-femoral index, one of the most significant, calculated for the Man of the Shroud (equal to 83% ±3%) is completely compatible with the mean value quoted in the bibliography (equal to 82.3%), while the tibio-femoral index measured on three different copies of the Shroud (respectively equal to 115%, 105%, and 103% ±4%) showed the incompatibility of images painted by artists who at that time did not have enough anatomic knowledge. The height of the Man of the Shroud turned out to be 174±2 cm, the rotation angle of the knee 24±2°, and the rotation angle of the foot 25±2°. The frontal and dorsal imprints of the Man of the Shroud are anatomically superimposable.


At first, I was not able to understand what this study really was. As I read the contents, I was amazed that the study's aim was to measure the height of the Man of the Turin Shroud (believed by many to be the burial image of Jesus Christ). So I started reading more attentively, and the authors have a really good analysis of how they arrived at what they believe is the right height. They computed the height with many mathematical formulas and equations, since the knee is slightly bent and the head is slightly bent forward. They also observed that the linen was not in contact with the whole body, which is why taking the height straightforwardly would introduce error.

The paper was clearly presented by the authors. All computations are clearly stated, with diagrams and pictures that make them easier for readers to understand.
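The tibio-femoral index check described in the abstract is simple arithmetic and can be reproduced. The bone lengths below are invented for illustration; the paper reports only the index values themselves:

```python
def tibio_femoral_index(tibia_cm: float, femur_cm: float) -> float:
    """Tibia length expressed as a percentage of femur length."""
    return 100.0 * tibia_cm / femur_cm

def compatible(index: float, reference: float, tolerance: float) -> bool:
    """Is a measured index within the stated tolerance of a reference value?"""
    return abs(index - reference) <= tolerance

# Hypothetical bone lengths giving an index near the paper's 83% +/- 3%.
idx = tibio_femoral_index(tibia_cm=38.4, femur_cm=46.3)
print(round(idx, 1), compatible(idx, 82.3, 3.0))  # near 83, compatible
print(compatible(115.0, 82.3, 4.0))               # a painted copy: False
```

This is the same comparison logic the authors use to argue that the painted copies (indices of 103-115%) could not be anatomically faithful.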

Browser Speed Comparisons
Mark Wilton-Jones


So overall, Opera seems to be the fastest browser for Windows. Firefox is not faster than Internet Explorer, except for scripting, but for standards support, security and features, it is a better choice. However, it is still not as fast as Opera, and Opera also offers a high level of standards support, security and features.
On Linux, Konqueror is the fastest for starting and viewing basic pages on KDE, but as soon as script or images are involved, or you want to use the back or forward buttons, or if you use Gnome, Opera is a faster choice, even though on KDE it will take a few seconds longer to start. Mozilla and Firefox give an overall good performance, but their script, cache handling and image-based page speed still cannot compare with Opera.
On Mac OS X, Opera and Safari are both very fast, with Safari 2 being faster at starting and rendering CSS, but with Opera still being distinguishably faster for rendering tables, scripting and history (especially compared with the much slower Safari 1.2). Camino 0.8 is fast to start, but then it joins its sisters Mozilla and Firefox further down the list. Neither Mozilla, Firefox nor IE performs very well on Mac, being generally slower than on other operating systems.
On Mac OS 9, no single browser stands out as the fastest. In fact, my condolences to anyone who has to use one of them, they all perform badly.


The research was done in a fair environment. Each test has a careful set of rules to make sure it gives unbiased results. The tests are grouped by platform, and for each platform the author used just one computer, to ensure that the tests compare just the browsers and not the hardware or software they run on. Each test was done with a default browser install, without tweaking any settings (he notes that many browsers perform slightly better if you tweak their network settings, but this is intended to be a test of a standard browser install; some people also suggest that using a native skin makes a browser faster, but he got virtually identical results with native and non-native skins). For browsers that also offer email or news features, he enabled these clients but did not have any email or news items in them (some of them may perform differently if they did, but that is not what he was trying to test). He based the tests on how each browser performs on the major tasks a browser is expected to perform; the basic requirements were HTML, CSS, JavaScript, basic DHTML, and images. He tested each browser on its speed in rendering CSS, rendering tables, cold start, warm start, scripting, loading multiple images, and loading history. Overall, the research is a big help for users who are concerned about how fast they surf the net. It is easy to understand, and the results are presented in a way that readers can grasp easily.
USEP-IC  :: Methods of Research-