How to code?

Summary: This article provides coding techniques and programming practices for improving the quality of source code. (12 printed pages)


Superior coding techniques and programming practices are hallmarks of a professional programmer. The bulk of programming consists of making a large number of small choices while attempting to solve a larger set of problems. How wisely those choices are made depends largely upon the programmer’s skill and expertise.

This document addresses some fundamental coding techniques and provides a collection of coding practices from which to learn. The coding techniques are primarily those that improve the readability and maintainability of code, whereas the programming practices are mostly performance enhancements.

The readability of source code has a direct impact on how well a developer comprehends a software system. Code maintainability refers to how easily that software system can be changed to add new features, modify existing features, fix bugs, or improve performance. Although readability and maintainability are the result of many factors, one particular facet of software development upon which all developers have an influence is coding technique. The easiest method to ensure that a team of developers will yield quality code is to establish a coding standard, which is then enforced at routine code reviews.

  • Coding Standards and Code Reviews
  • Coding Techniques
  • Programming Practices
  • Conclusion
  • Suggested Reading

Coding Standards and Code Reviews

A comprehensive coding standard encompasses all aspects of code construction and, while developers should exercise prudence in its implementation, it should be closely followed. Completed source code should reflect a harmonized style, as if a single developer wrote the code in one session. At the inception of a software project, establish a coding standard to ensure that all developers on the project are working in concert. When the software project will incorporate existing source code, or when performing maintenance upon an existing software system, the coding standard should state how to deal with the existing code base.

Although the primary purpose for conducting code reviews throughout the development life cycle is to identify defects in the code, the reviews can also be used to enforce coding standards in a uniform manner. Adherence to a coding standard can only be feasible when followed throughout the software project from inception to completion. It is not practical, nor is it prudent, to impose a coding standard after the fact.

Coding Techniques

Coding techniques incorporate many facets of software development and, although they usually have no impact on the functionality of the application, they contribute to an improved comprehension of source code. For the purpose of this document, all forms of source code are considered, including programming, scripting, markup, and query languages.

The coding techniques defined here are not proposed to form an inflexible set of coding standards. Rather, they are meant to serve as a guide for developing a coding standard for a specific software project.

The coding techniques are divided into three sections:

  • Names
  • Comments
  • Format


Names

Perhaps one of the most influential aids to understanding the logical flow of an application is how the various elements of the application are named. A name should tell “what” rather than “how.” By avoiding names that expose the underlying implementation, which can change, you preserve a layer of abstraction that simplifies the complexity. For example, you could use GetNextStudent() instead of GetNextArrayElement().

A tenet of naming is that difficulty in selecting a proper name may indicate that you need to further analyze or define the purpose of an item. Make names long enough to be meaningful but short enough to avoid being wordy. Programmatically, a unique name serves only to differentiate one item from another. Expressive names function as an aid to the human reader; therefore, it makes sense to provide a name that the human reader can comprehend. However, be certain that the names chosen are in compliance with the applicable language’s rules and standards.

Following are recommended naming techniques:

Routines

  • Avoid elusive names that are open to subjective interpretation, such as Analyze() for a routine, or xxK8 for a variable. Such names contribute to ambiguity more than abstraction.
  • In object-oriented languages, it is redundant to include class names in the name of class properties, such as Book.BookTitle. Instead, use Book.Title.
  • Use the verb-noun method for naming routines that perform some operation on a given object, such as CalculateInvoiceTotal().
  • In languages that permit function overloading, all overloads should perform a similar function. For those languages that do not permit function overloading, establish a naming standard that relates similar functions.


Variables

  • Append computation qualifiers (Avg, Sum, Min, Max, Index) to the end of a variable name where appropriate.
  • Use customary opposite pairs in variable names, such as min/max, begin/end, and open/close.
  • Since most names are constructed by concatenating several words together, use mixed-case formatting to simplify reading them. In addition, to help distinguish between variables and routines, use Pascal casing (CalculateInvoiceTotal) for routine names where the first letter of each word is capitalized. For variable names, use camel casing (documentFormatType) where the first letter of each word except the first is capitalized.
  • Boolean variable names should contain Is, which implies Yes/No or True/False values, such as fileIsFound.
  • Avoid using terms such as Flag when naming status variables, which differ from Boolean variables in that they may have more than two possible values. Instead of documentFlag, use a more descriptive name such as documentFormatType.
  • Even for a short-lived variable that may appear in only a few lines of code, still use a meaningful name. Use single-letter variable names, such as i, or j, for short-loop indexes only.
  • If using Charles Simonyi’s Hungarian Naming Convention, or some derivative thereof, develop a list of standard prefixes for the project to help developers consistently name variables. For more information, see “Hungarian Notation.”
  • For variable names, it is sometimes useful to include notation that indicates the scope of the variable, such as prefixing a g_ for global variables and m_ for module-level variables in Microsoft Visual Basic®.
  • Constants should be all uppercase with underscores between words, such as NUM_DAYS_IN_WEEK. Also, begin groups of enumerated types with a common prefix, such as FONT_ARIAL and FONT_ROMAN.


Tables

  • When naming tables, express the name in the singular form. For example, use Employee instead of Employees.
  • When naming columns of tables, do not repeat the table name; for example, avoid having a field called EmployeeLastName in a table called Employee.
  • Do not incorporate the data type in the name of a column. This will reduce the amount of work needed should it become necessary to change the data type later.

Microsoft SQL Server

  • Do not prefix stored procedures with sp_, because this prefix is reserved for identifying system-stored procedures.
  • In Transact-SQL, do not prefix variables with @@, which should be reserved for truly global variables such as @@IDENTITY.


Miscellaneous

  • Minimize the use of abbreviations. If abbreviations are used, be consistent in their use. An abbreviation should have only one meaning, and likewise, each abbreviated word should have only one abbreviation. For example, if using min to abbreviate minimum, do so everywhere and do not later use it to abbreviate minute.
  • When naming functions, include a description of the value being returned, such as GetCurrentWindowName().
  • File and folder names, like procedure names, should accurately describe what purpose they serve.
  • Avoid reusing names for different elements, such as a routine called ProcessSales() and a variable called iProcessSales.
  • Avoid homonyms when naming elements to prevent confusion during code reviews, such as write and right.
  • When naming elements, avoid using commonly misspelled words. Also, be aware of differences that exist between American and British English, such as color/colour and check/cheque.
  • Avoid using typographical marks to identify data types, such as $ for strings or % for integers.
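
As a minimal sketch of several of the recommendations above (verb-noun routine names, Is-prefixed Booleans, camel-cased variables, and prefixed constant groups). The Student class and every identifier here are hypothetical, and the article’s Pascal/camel casing conventions are used even though Python’s own style differs:

```python
# Constants: all uppercase, with a common prefix for an enumerated group
FONT_ARIAL = 0
FONT_ROMAN = 1

class Student:
    """Hypothetical class used only to illustrate the naming guidelines."""
    def __init__(self, name):
        self.name = name            # not studentName: the class name stays out of members
        self.gradeIsPassing = True  # Boolean name contains "Is", implying True/False

# Verb-noun routine whose name says "what" (next student), not "how" (array element)
def GetNextStudent(studentList, currentIndex):
    return studentList[currentIndex + 1]

studentList = [Student("Ada"), Student("Grace")]  # camel-cased variable name
print(GetNextStudent(studentList, 0).name)        # prints "Grace"
```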


Comments

Software documentation exists in two forms, external and internal. External documentation is maintained outside of the source code, such as specifications, help files, and design documents. Internal documentation is composed of comments that developers write within the source code at development time.

One of the challenges of software documentation is ensuring that the comments are maintained and updated in parallel with the source code. Although properly commenting source code serves no purpose at run time, it is invaluable to a developer who must maintain a particularly intricate or cumbersome piece of software.

Following are recommended commenting techniques:

  • When modifying code, always keep the commenting around it up to date.
  • At the beginning of every routine, it is helpful to provide standard, boilerplate comments indicating the routine’s purpose, assumptions, and limitations. A boilerplate comment should be a brief introduction that explains why the routine exists and what it can do.
  • Avoid adding comments at the end of a line of code; end-line comments make code more difficult to read. However, end-line comments are appropriate when annotating variable declarations. In this case, align all end-line comments at a common tab stop.
  • Avoid using clutter comments, such as an entire line of asterisks. Instead, use white space to separate comments from code.
  • Avoid surrounding a block comment with a typographical frame. It may look attractive, but it is difficult to maintain.
  • Prior to deployment, remove all temporary or extraneous comments to avoid confusion during future maintenance work.
  • If you need comments to explain a complex section of code, examine the code to determine if you should rewrite it. If at all possible, do not document bad code—rewrite it. Although performance should not typically be sacrificed to make the code simpler for human consumption, a balance must be maintained between performance and maintainability.
  • Use complete sentences when writing comments. Comments should clarify the code, not add ambiguity.
  • Comment as you code, because most likely there won’t be time to do it later. Also, should you get a chance to revisit code you’ve written, that which is obvious today probably won’t be obvious six weeks from now.
  • Avoid the use of superfluous or inappropriate comments, such as humorous sidebar remarks.
  • Use comments to explain the intent of the code. They should not serve as inline translations of the code.
  • Comment anything that is not readily obvious in the code.
  • To prevent recurring problems, always use comments on bug fixes and work-around code, especially in a team environment.
  • Use comments on code that consists of loops and logic branches. These are key areas that will assist the reader when reading source code.
  • Separate comments from comment delimiters with white space. Doing so will make comments stand out and easier to locate when viewed without color clues.
  • Throughout the application, construct comments using a uniform style, with consistent punctuation and structure.
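
A sketch of the boilerplate-comment guideline above: a brief header stating purpose, assumptions, and limitations. The routine and its behavior are invented for illustration:

```python
def CalculateInvoiceTotal(lineItemAmounts, taxRate):
    # Purpose:     Compute the total of an invoice, including tax.
    # Assumptions: lineItemAmounts is a non-empty sequence of numbers;
    #              taxRate is a fraction, such as 0.25 for 25%.
    # Limitations: Does not handle discounts or currency rounding rules.
    subtotal = sum(lineItemAmounts)   # sum of all line items
    return subtotal * (1 + taxRate)   # add tax to the subtotal

print(CalculateInvoiceTotal([100.0, 50.0], 0.25))  # prints 187.5
```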

Notes   Despite the availability of external documentation, source code listings should be able to stand on their own because hard-copy documentation can be misplaced.

External documentation should consist of specifications, design documents, change requests, bug history, and the coding standard that was used.


Format

Formatting makes the logical organization of the code stand out. Taking the time to ensure that the source code is formatted in a consistent, logical manner is helpful to yourself and to other developers who must decipher the source code.

Following are recommended formatting techniques:

  • Establish a standard size for an indent, such as four spaces, and use it consistently. Align sections of code using the prescribed indentation.
  • Use a monospace font when publishing hard-copy versions of the source code.
  • Except for constants, which are best expressed in all uppercase characters with underscores, use mixed case instead of underscores to make names easier to read.
  • Align open and close braces vertically where brace pairs align, such as:
    for (i = 0; i < 100; i++)
    {
        // ...
    }

    You can also use a slanting style, where the open brace appears at the end of the line and the close brace appears at the beginning of the next line, such as:

    for (i = 0; i < 100; i++){
        // ...
    }

    Whichever style is chosen, use that style throughout the source code.

  • Indent code along the lines of logical construction. Without indenting, code becomes difficult to follow, such as:
    If … Then
    If … Then
    End If
    End If

    Indenting the code yields easier-to-read code, such as:

    If … Then
         If … Then
         End If
    End If
  • Establish a maximum line length for comments and code to avoid having to scroll the source code window and to allow for clean hard-copy presentation.
  • Use spaces before and after most operators when doing so does not alter the intent of the code. For example, an exception is the pointer notation used in C++.
  • Put a space after each comma in comma-delimited lists, such as array values and arguments, when doing so does not alter the intent of the code. For example, an exception is an ActiveX® Data Object (ADO) Connection argument.
  • Use white space to provide organizational clues to source code. Doing so creates “paragraphs” of code, which aid the reader in comprehending the logical segmenting of the software.
  • When a line is broken across several lines, make it obvious that the line is incomplete without the following line.
  • Where appropriate, avoid placing more than one statement per line. An exception is a loop in C, C++, Visual J++®, or JScript®, such as for (i = 0; i < 100; i++).
  • When writing HTML, establish a standard format for tags and attributes, such as using all uppercase for tags and all lowercase for attributes. As an alternative, adhere to the XHTML specification to ensure all HTML documents are valid. Although there are file size trade-offs to consider when creating Web pages, use quoted attribute values and closing tags to ease maintainability.
  • When writing SQL statements, use all uppercase for keywords and mixed case for database elements, such as tables, columns, and views.
  • Divide source code logically between physical files.
  • In ASP, use script delimiters around blocks of script rather than around each line of script or interspersing small HTML fragments with server-side scripting. Using script delimiters around each line or interspersing HTML fragments with server-side scripting increases the frequency of context switching on the server side, which hampers performance and degrades code readability.
  • Put each major SQL clause on a separate line so statements are easier to read and edit, for example:
    SELECT FirstName, LastName
    FROM Customers
    WHERE State = 'WA'
  • Do not use literal numbers or literal strings, such as For i = 1 To 7. Instead, use named constants, such as For i = 1 To NUM_DAYS_IN_WEEK, for ease of maintenance and understanding.
  • Break large, complex sections of code into smaller, comprehensible modules.
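
The named-constant guideline above might look like this in practice; the weekly-hours data is made up for illustration:

```python
NUM_DAYS_IN_WEEK = 7  # named constant in place of the literal 7

# The loop's intent is clear, and changing the bound means editing one place only
hoursWorked = [8, 8, 8, 8, 8, 0, 0]
weeklyTotal = 0
for day in range(NUM_DAYS_IN_WEEK):
    weeklyTotal += hoursWorked[day]

print(weeklyTotal)  # prints 40
```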

Programming Practices

Experienced developers follow numerous programming practices or rules of thumb, which are typically derived from hard-learned lessons. The practices listed below are not all-inclusive and should not be used without due consideration. Veteran programmers deviate from these practices on occasion, but not without careful consideration of the potential repercussions. Using the best programming practice in the wrong context can cause more harm than good.

  • To conserve resources, be selective in the choice of data type to ensure the size of a variable is not excessively large.
  • Keep the lifetime of variables as short as possible when the variables represent a finite resource for which there may be contention, such as a database connection.
  • Keep the scope of variables as small as possible to avoid confusion and to ensure maintainability. Also, when maintaining legacy source code, the potential for inadvertently breaking other parts of the code can be minimized if variable scope is limited.
  • Use variables and routines for one and only one purpose. In addition, avoid creating multipurpose routines that perform a variety of unrelated functions.
  • When writing classes, avoid the use of public variables. Instead, use procedures to provide a layer of encapsulation and also to allow an opportunity to validate value changes.
  • When using objects pooled by MTS, acquire resources as late as possible and release them as soon as possible. As such, you should create objects as late as possible, and destroy them as early as possible to free resources.
  • When using objects that are not being pooled by MTS, it is necessary to examine the expense of the object creation and the level of contention for resources to determine when resources should be acquired and released.
  • Use only one transaction scheme, such as MTS or SQL Server™, and minimize the scope and duration of transactions.
  • Be wary of using ASP Session variables in a Web farm environment. At a minimum, do not place objects in ASP Session variables because session state is stored on a single machine. Consider storing session state in a database instead.
  • Stateless components are preferred when scalability or performance are important. Design the components to accept all the needed values as input parameters instead of relying upon object properties when calling methods. Doing so eliminates the need to preserve object state between method calls. When it is necessary to maintain state, consider using alternative methods, such as maintaining state in a database.
  • Do not open data connections using a specific user’s credentials. Connections that have been opened using such credentials cannot be pooled and reused, thus losing the benefits of connection pooling.
  • Avoid the use of forced data conversion, sometimes referred to as variable coercion or casting, which may yield unanticipated results. This occurs when two or more variables of different data types are involved in the same expression. When it is necessary to perform a cast for other than a trivial reason, that reason should be provided in an accompanying comment.
  • Develop and use error-handling routines. For more information on error handling in Visual Basic, see the “Error Handling and Debugging” chapter of the Microsoft Office 2000/Visual Basic Programmer’s Guide, available in the MSDN Library. For more information on error handling and COM, see “Error Handling” in the Platform SDK.
  • Be specific when declaring objects, such as ADODB.Recordset instead of just Recordset, to avoid the risk of name collisions.
  • Require the use of Option Explicit in Visual Basic and VBScript to encourage forethought in the use of variables and to minimize errors resulting from typographical mistakes.
  • Avoid the use of variables with application scope.
  • Use RETURN statements in stored procedures to help the calling program know whether the procedure worked properly.
  • Use early binding techniques whenever possible.
  • Use Select Case or Switch statements in lieu of repetitive checking of a common variable using If…Then statements.
  • Explicitly release object references.


Database

  • Never use SELECT *. Always be explicit about which columns to retrieve, and retrieve only the columns that are required.
  • Refer to fields by name; do not reference fields by their ordinal placement in a Recordset.
  • Use stored procedures in lieu of SQL statements in source code to leverage the performance gains they provide.
  • Use a stored procedure with output parameters instead of single-record SELECT statements when retrieving one row of data.
  • Verify the row count when performing DELETE operations.
  • Perform data validation at the client during data entry. Doing so avoids unnecessary round trips to the database with invalid data.
  • Avoid using functions in WHERE clauses.
  • If possible, specify the primary key in the WHERE clause when updating a single row.
  • When using LIKE, do not begin the string with a wildcard character because SQL Server will not be able to use indexes to search for matching values.
  • Use WITH RECOMPILE in CREATE PROC when a wide variety of arguments are passed, because the plan stored for the procedure might not be optimal for a given set of parameters.
  • Stored procedure execution is faster when you pass parameters by position (the order in which the parameters are declared in the stored procedure) rather than by name.
  • Use triggers only for data integrity enforcement and business rule processing and not to return information.
  • After each data modification statement inside a transaction, check for an error by testing the global variable @@ERROR.
  • Use forward-only/read-only recordsets. To update data, use SQL INSERT and UPDATE statements.
  • Never hold locks pending user input.
  • Use uncorrelated subqueries instead of correlated subqueries. Uncorrelated subqueries are those where the inner SELECT statement does not rely on the outer SELECT statement for information. In uncorrelated subqueries, the inner query is run once instead of being run for each row returned by the outer query.
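
As a runnable sketch of two of the guidelines above — an explicit column list instead of SELECT *, and an uncorrelated subquery — using SQLite in place of SQL Server; the Employee table and its rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmployeeId INTEGER, LastName TEXT, Salary REAL)")
conn.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?)",
    [(1, "Smith", 50000.0), (2, "Jones", 70000.0), (3, "Lee", 90000.0)],
)

# Explicit column list instead of SELECT *; the inner SELECT is uncorrelated,
# so the database can evaluate it once rather than once per outer row.
rows = conn.execute(
    """SELECT LastName, Salary
       FROM Employee
       WHERE Salary > (SELECT AVG(Salary) FROM Employee)"""
).fetchall()

print(rows)  # prints [('Lee', 90000.0)]
```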


ADO

  • Tune the RecordSet.CacheSize property to what is needed. Using too small or too large a setting will adversely impact the performance of an application.
  • Bind columns to field objects when looping through recordsets.
  • For Command objects, describe the parameters manually instead of using Parameters.Refresh to obtain parameter information.
  • Explicitly close ADO Recordset and Connection objects to ensure that connections are promptly returned to the connection pool for use by other processes.
  • Use adExecuteNoRecords for non-row-returning commands.
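
The explicit-release guidelines above have a general analogue: deterministic cleanup. A sketch using Python’s sqlite3 module in place of ADO (note that sqlite3’s with statement manages transactions, not closing, so the connection is still closed explicitly):

```python
import sqlite3

# Scoping the work in a "with" block commits the transaction deterministically,
# the same intent as explicitly closing ADO Connection objects when done.
with sqlite3.connect(":memory:") as conn:
    conn.execute("CREATE TABLE T (x INTEGER)")
    conn.execute("INSERT INTO T VALUES (1)")
    total = conn.execute("SELECT COUNT(*) FROM T").fetchone()[0]

conn.close()  # sqlite3's "with" does not close the connection; close it explicitly
print(total)  # prints 1
```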


Conclusion

Using solid coding techniques and good programming practices to create high-quality code plays an important role in software quality and performance. In addition, by consistently applying a well-defined coding standard and proper coding techniques, and by holding routine code reviews, a team of programmers working on a software project is more likely to yield a software system that is easy to comprehend and maintain.

How can I improve my algorithmic coding skills?

Hello friends, here are a few things that you can try for a better understanding:

  • Thoroughly look into what is implemented in C.
  • Implement all the algorithms you’ve learnt in as many ways as possible.
  • As many of you know, implementation is the main problem; you might be messing up the corner cases. Be careful with them.
  • Try to solve the Codeforces contest archives. Spend some time on each problem and then look into the editorials. This will help a lot if you plan to take part in ACM ICPC.
  • You can also try out the TopCoder algorithm tutorials and contests.

Here are a few useful sites for practising algorithmic programming.

  • HackerRank (previously InterviewStreet)
    Not-so-easy questions; most of them are challenging.
    Solving one question requires multiple concepts.
    Companies come hiring via CodeSprints on the site.
  • Sphere Online Judge (SPOJ)
    Use it to search for problems on a concept and then solve those questions.
    Contains more than 10K questions.
    Has questions of all levels.
  • Codechef
    Contains questions of all levels (archives).
    There will be around 2 contests per month.
    They conduct non-programming contests once in a while (maybe every 3 months).
    Moderate and difficult questions.
    Very good community.
    High-quality tutorials.
  • (similar to TopCoder)
    Contains many easy questions in the beginning, but the questions after the 200th are challenging.
    One can start with this (questions numbered <100) and move on to other sites once he/she gains confidence.

How can I improve my programming skills?

Start by carving out 20% of your time to devote to your own skills development. If possible, it’ll be better if that 20% comes from one or two hours a day rather than a day a week because you can then make a daily habit out of improving your skills. Your productivity may decrease initially (or it might not change much if you’re replacing web surfing or other distractions), but the goal is to make investments that will make you more effective in the long run.

So what should you do with that 20% time? Since you’re at a well-known tech company, you should take advantage of the resources available to you. Here are 10 suggestions:

  • Study code on core abstractions written by the best engineers at the company, and understand why certain choices were made and how they were implemented. For example, if you’re at Google, read through code in some of the core libraries written by early engineers like Jeff Dean, Craig Silverstein, or Ben Gomes. Start with ones that you’ve used before, and ask yourself if you would’ve written similar code for various parts and how you might learn from their examples.
  • If you’re not too efficient on your text editor or IDE, ask some of your more effective co-workers if they’d mind you watching them for a bit while they’re coding. Are they using keyboard shortcuts or editor functionality that you’re not familiar with but that make them much more efficient? If so, learn and practice them. Search for productivity tips on Google for whatever development environment you use. When I was first learning Emacs, for example, Steve Yegge’s very good 10 Specific Ways to Improve Your Productivity With Emacs made me noticeably more efficient. Even in re-reading parts of that page for this answer, I’ve learned something new.
  • Read through any technical, educational material available internally. Google, for instance, has a wide array of codelabs that teach core abstractions and high-quality guides of best practices that veteran engineers have written for various languages based on decades of experience. If your company doesn’t have similar resources, Google has open-sourced some of their guides.
  • Master the programming language(s) that you use. Read a good book or two on the languages. Focus on developing a solid grasp of the advanced concepts in that language, and gain familiarity with core, language libraries. Make sure that at least one of your languages is a scripting language (e.g. Python) that you can use as your Swiss army knife for quick tasks.
  • Send your code reviews to the harshest critics. Optimize for getting good, thoughtful feedback rather than for lowering the barrier to getting your work checked in. Ask for a more detailed review on implementations that you’re not too confident about.
  • Enroll in classes in areas you want to be stronger at. These could be ones offered on the company campus, on nearby university campuses, or online. Many of the larger tech companies will even pay for your classes. If you want to get better at programming, take more hands-on classes on topics like design patterns or on some programming language.
  • Build a good reading list of technical books, and start reading. Your company may even reimburse you. Here’s a start: What is the single most influential book every programmer should read?
  • Work on different types of projects, and use them as learning opportunities. If you find yourself always doing similar tasks using similar methods, it’s going to be hard to get out of your comfort zone and to pick up new skills.
  • Make sure you’re on a team with at least a few senior engineers that you feel you can learn from. If you’re not, consider changing projects or teams. This will help increase your learning rate for the remaining 80% of your time.
  • Write more code. If you feel like programming is your weak point, spend more of your time on building and writing code since you won’t get better unless you practice the craft. To the extent that you can, shift time away from other engineering-related duties like managing projects, fixing bugs, attending meetings, designing products, etc.

Good luck!

Which computer language(s) are used to make WhatsApp?


WhatsApp has been developed from the early days using open source software. WhatsApp engineers use, contribute to and release a lot of open source software.

We contribute to key projects

Our engineers are eager to contribute back to the open source community.

Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability.

FreeBSD is an advanced computer operating system used to power modern servers, desktops and embedded platforms.

jqGrid is an Ajax-enabled JavaScript control that provides solutions for representing and manipulating tabular data on the web.

libphonenumber is Google’s common Java, C++, and JavaScript library for parsing, formatting, storing, and validating international phone numbers.

LightOpenId is a PHP 5 library for easy openid authentication.

lighttpd is an open-source web server optimized for high performance environments while remaining standards-compliant, secure and flexible.

PHP is a popular general-purpose scripting language that is especially suited to web development.

yaws is a high-performance HTTP 1.1 web server particularly well suited for dynamic-content web applications.

Source: WhatsApp :: Open_source

IoT Security is everybody’s business!! – Part 2

We identified the risks and potential threats to our daily lives in part 1 of this blog. In this part, let us discuss some preventive measures to secure our lives: remedial steps that will help restore faith in our technology-driven lives.

A study by Hewlett Packard shows that around 70% of connected devices are prone to serious threats. Many consumers of technology, roughly more than 76%, do not understand or appreciate these risks. The attitude is: “..it has not impacted me so far…”.

To deal with this, let us identify the top 10 security issues with IoT and increase our awareness. The following could be potential sources:

  • Insufficient authentication or authorization
  • Insecure Web interface
  • Insecure network services
  • Insufficient security configuration
  • Privacy concerns
  • Insecure mobile interface
  • Lack of transport encryption
  • Insecure software or firmware
  • Insecure cloud interface
  • Poor physical security

The above list, though not exhaustive, is definitely worth pondering.

All organizations rallying to be the top IoT product and solution providers must compel themselves to create hardened security platforms that make their solutions bullet-proof against any resulting vulnerability.

While everybody would love to believe that prevention is better than cure, we cannot ignore the detection and detention of rogue application creators, hackers, disruptors, and havoc-makers. The cyber laws of all lands embracing such technological progress (it leaves none untouched, though) need to be made more stringent, with stronger detection and preventive outcomes. A new brand of cyber-cops will need to be constituted, with in-depth knowledge and technical capabilities (rather, extensive training) to:

    Comprehend the types of crimes that can be committed
    Trace the equipment used for the crime with strong analytical skills
    Understand the device characteristics and potential vulnerable points
    Analyze the data generated by millions of devices
    Profile the device types used in the crime
    Understand data privacy laws and detect the extent of damage
    Understand the compliance laws of several vertical industries (like BFSI)
    Know most of the categorized IoT devices used in solutions
    And many more

What I am indicating is that cyber police can no longer be select, location-based teams in a police station, but must be properly networked teams with extensive technical knowledge of the field. They must be equipped with applications and mechanisms to establish crime patterns and behavioral trends for each typical class of crime being committed. These can also be virtual teams working in a distributed fashion while building a virtual cyber security data center, with enough capability and credibility to nip crime in the bud, bringing speed and effectiveness to the crime scene.

While preparing for this so-called 3rd Industrial Revolution, policy makers must take the following actions as part of their readiness:

Defining and designing cyber threat intelligence (CTI)
Defining Cyber security ecosystem including suppliers, partners, vendors, business networks
Cyber cells must be formed in each citizen-service department to create preventive mechanisms for tracking cyber-crimes and intervening at greater speed
Creating a level of understanding among organizations for strong governance, controls and accountability
Enlisting high-value assets (buildings, transport, physical data centers among many) and provisioning for their safety against such attacks
Using forensic analytics continuously to understand the cyber threat sources and their patterns through threat intelligence data
Policies to monitor all financial transactions through mobile devices to understand the modus operandi

Cyber security can no longer be tagged only to IT engineers in this digital era, especially as engineering organizations embrace it in a big way. With the amalgamation of engineers from various branches to form IoT teams, creating safeguards has to be a collaborative effort between the core engineers and the IT engineers. Every solution must be scrutinized for security threats, and provisioning against them must be part of each IoT solution. Penetration testing techniques will need more sophistication to weed out holes, and at a much better pace.

There must be security norms laid out, and each customer must at all times think about and demand security wrappers around the solutions being doled out. I hate to say this, but CYBER SECURITY CAN BECOME A NIGHTMARE if not taken care of!!

IoT Security is everybody’s business!! – Part 1

With the Digital wave, the structure of IT organizations, especially those racing to embrace new technologies and IoT, is poised for a paradigm shift. Every brilliant side of a technological revolution comes with a darker patch as well. With so much data slated to be generated via connected devices, cyber security can no longer be the forte of IT folks ONLY.

While technology brings in convenience, it also comes at a cost (read flip side).

In the recent past in India, we have started seeing mobile wallets increasingly being used for payments and other financial transactions to other devices or accounts. These connected wallets also create opportunities for hackers to break in and creatively lay their hands on information pertaining to transactions, account details, payee details and numbers, payment patterns, sources of funds, and much other confidential data which one would not like to divulge.

Cyber security will don a new hat with the advent of new technologies and devices working in tandem. Stopping break-ins will need a lot more intelligence and smarter techniques. The provisioning of security for these mushrooming applications and connected devices will need to be understood well, so that people know they are secure while transacting through gateways to personal data. The approach itself requires comprehensive techniques.

The mobile channel will offer more incentives as the volumes of both devices and transactions increase. The global reach of mobiles has given hackers across the global hacking communities a set of standard techniques. Ubiquity and connectivity make mobile devices both easier to reach and more vulnerable. The incentives are undoubtedly greater for mobiles used for financial transactions, and it may not be hard for hackers to learn which number a user relies on to carry them out.

The richer a mobile's features, the more it becomes a target for hackers. Concern about privacy invasion by advertisers is rising steeply with these smarter devices. In 2010-11 The Wall Street Journal tested 101 Android/iOS applications and found that more than half sent device information, 47 shared location data, and 5% sent users' personal information to advertisers without the users' consent.

More than 1,000 malware strains target mobile devices globally. A single worm attack can rapidly infect mobiles to the tune of millions of handsets. As mobiles get more advanced, so do the worms, growing in sophistication and raising the quality of their attacks as well. As carriers improve device capability, Bluetooth and Wi-Fi are also becoming airborne contaminators. Some viruses dial international numbers while the subscriber is sleeping.

Mobile computing increases data loss as well. With connected devices expected to transmit data across applications and other devices, hackers will try ways and means to create opportunities in the chaos. Mobile banking has also brought in rogue applications which smartly work their way into gathering financial information from devices, even piggybacking on legitimate applications topped with malware at app stores.

On top of all this, it is said that more than 37% of service providers do not have any threat intelligence programs.

Impacting Scenarios

As hackers take control of connected devices, the very capabilities for which IoT was brought in (efficiency, productivity, ease, etc.) will be compromised. It is scary to even think what would happen if folks were unable to stop machines, especially large ones, controlled by connected devices for convenience. IT security by itself will not stand ground here. Extended knowledge across applied industrial controls and production processes will become mandatory to put the checks and balances in place. (What if one is not able to stop a blast furnace in a steel plant?)

Water Management: Anything which is scarce and essential comes under the cloud of threat and catches attention for disruptive opportunities. Water management through connected devices is becoming a lucrative offering from many vendors, ensuring appropriate water quality, controlled water supply, water treatment, metering and other features. Water consumption, like electricity, is also vulnerable where automatic valves and control mechanisms for pressure and flow are controlled through technology. A loss of control would cause widespread wastage of water and lead to a water crisis.

Patients' Health Records (PHR)

The PHRs of patients are too personal to be privy to. These records reveal several confidential parameters of an individual's health profile: historic ailments, recent health issues, blood group, and much more data which can lead to people either playing with or destroying the data, or holding it for ransom. Very dangerous but true; not that we need to be scared, but awareness of such a threat will be missing until the first casualty occurs.

Nuclear plants, used for positive purposes like generating power, can be a huge source of risk if they were to lose hold over the control process of their reactors. If IoT-based controllers were deployed in these plants for analytics and accompanying research advantages, there should be exhaustive sets of checks and audits built in, plus multiple approvals at multiple governance decision points, to ensure disasters are at least minimized.

Likewise, hacking connected or smart cars can lead to road disasters. This includes hacking smart traffic management, a feature of smart cities. Insurance transactions can be blocked and claims disabled or diverted as insurance segments move from statistics-based to individual fact-based policies.

Cloud is another source of vulnerability. The plethora of data being stored in the cloud will require tighter security solutions, and hence cloud data security will only become more crucial.

It is said that M2M communications will themselves generate about $900 billion in revenues by 2020.

Dependency on connected devices for various aspects of the futuristic work-style, such as improved real-time decision making, better solution design, reliance on the generated data analytics (what about data quality?), driving future product conceptualization, fleet management, and many others, could become a challenge if the systems malfunction due to malware or cyber-attacks.

The above are potential scenarios where the flip side of technology, if misused, can create disasters and cause unimaginable disruption. However, it is not too late to create a strategic security blueprint and raise awareness levels among the public embracing these newer emerging solutions.

We will discuss the potential next steps (what we should do, what the state agencies should do, and what general users should know) in the sequel to this blog shortly. Till then, happy reading…


Focus during transition is a key success factor; if you lose focus, lots of things may fall off which would be impossible to re-gather and move on. While the statement is simple, the act of holding things together really needs multi-tasking, intense planning and precise execution. If not done meticulously, it can be a huge challenge, if not a nightmare! The following areas of focus need to be kept in mind:

    Change Management
    Client Management
    Risk Management
    Communication Management
    Quality Management
    Issue Management
    Scope Management
    Schedule Management
    Resource Management
    Security Management
    Transition Program Management
    Cutover Management

Apart from that, on the bolt-ons, we have the following areas to look into:

    Vendor Management
    Transition Planning
    Pursuit Handover
    Checklist usage and Tools Management

Change Management is the key ingredient of any transition, and if we were to put across our experiences, all other topics get covered under this one umbrella. After all, it is a game of Management of Change (MoC)!

Change, being inevitable, starts with the team: the members who walk in to support start bringing in changes, which come in trickles initially and then grow. The changes could be in people, processes, or technology. The proportions grow larger when transformation activities are chipped into the plan, and the landscape itself is then prone to change due to business needs and compulsions.

Client Management is another very critical area, and due to the collaborative nature of MoC, every service provider must take the customer along on the journey. Having said that, the client should actually be the source of change requests, and unless the client plays the game with objectives clearly defined, the outcomes can be unsatisfactory. However, that little nudge in the right direction sometimes becomes mandatory for us as technology service advisors.

Risk: no change and no gain comes without risk, and transition is no exception. Risk Management is key, since bits and pieces tend to fall all over during this journey, turbulent as it may be. The risks can range from availability of teams, to platforms, to vendor behaviours, and so on. It is hence very critical to start dealing with risks upfront, through identification and by understanding the customer's contours. This is where the experience of the transition manager counts: a person who has seen this before can smell the risks much earlier in the game. However, make no mistake, not all risks are defined or can be. Risks can come in any shape, form or time, and transition managers need to tread carefully and build sufficient mechanisms upfront to mitigate them. The booby traps that appear as risks during transitions are difficult to gauge upfront all the time.

Communication is a very critical weapon in the transition kit. If you don't define communication with stakeholders, transparency suffers, and this is where the client can become most apprehensive. An appropriate communication plan should be included in the Integrated Transition Plan document, with the relevant stakeholders and the mode of governance and communication defined.

Quality is driven by mutually defining the set of structured processes for the engagement between the client and us. All metrics must be unambiguously defined and reflected in the reports. After all, what gets measured gets done!

Issue Management should commence the instant you start identifying risks for the engagement, and this happens much earlier, in the pre-sales cycle, when you are assessing the landscape to take over. Risks that are not mitigated eventually become issues needing resolution, so being proactive and diligent is critical to minimizing issues during the transition.

Scope is very critical, and unless we use base-lining techniques, it will only add to the turbulence during the journey. There will be instances where the elements of transition change, such as the number of devices or applications, but we need to work on lead times. Many times the client keeps changing the scope, be it applications, devices, the window of services, or L1/L2/L3 definitions as per his perspective, but may still expect the deadlines not to shift. This is a real challenge, and hence you have to keep impressing upon the client the risks of doing a quick and dirty job in view of these changes. If the client confirms the risk appetite, it becomes easier to take on the risks. Hence a risk profile of the scenario must be documented and submitted as a formal report or deliverable as part of the Change Management process.

Related to the above are the Schedule and Resource Management aspects. The changes above have a direct impact on the schedule, deadlines, and resource needs in quantity and quality. I have seen in-flight changes to scope, and those, if not handled deftly and diplomatically, can turn into a relationship disaster. Your focus should be on transparency, so that you find a natural sympathizer in the client organization (hence you should insist on the identification of a Client Transition SPOC). Mobilizing resources is another challenge, even in large organizations, so expect hard negotiations on lead times.

Security Management is more an item for set-up and steady state, but the seeds are sown during the transition, and it is highly inflammable if team members don't understand the impact of NDAs and data privacy. One of the activities in the transition plan should be a 30-minute briefing by the program manager to all team members on the contractual obligations around security and its breaches, and more so, the consequences.

It is highly advisable to have a TPMO, the Transition Program Management Office, commissioned as part of the start-up. With so many things flying around, in transit, dynamically changing, one should have a one-stop shop to manage it all. That Management of Change office is the TPMO, where such things are noted, notified, called out, actioned and resolved, driving everything towards the destination milestone!! Many times this is given a miss, and if you cut corners here, especially for engagements of 40+ FTEs, you will feel the heat in the course of the engagement.

When you are ready to take over, the set of clients for the commencement of such a change in services may also change. They could be your clients themselves, or their end customers. Whoever the stakeholders are, all must be notified of the upcoming changes in services: from when, what will change (call-in numbers, especially), who will be responsible, improvements if any, and changes in processes, if any. Hence the TPMO must establish the Transition Cutover Command Center (TC3) for communication in advance, so that service disruption due to non-awareness is minimized.

There are other areas that come into play during this Management of Change:

We could be in a situation where we need to manage vendors on behalf of the client. If there are many such vendors and contract novation happens, it is good general practice to set up a multi-vendor council (MVC).
An Integrated Transition Plan is a blueprint with all planning aspects addressed comprehensively, including the MPP for the transition schedule, the RACI, etc. It becomes a rulebook establishing who will do what and when. It should also be issued as a deliverable to the client and sign-off obtained.
When the focus of activity passes from sales to transition, especially if the transition manager was not involved from the pre-sales stage, many things can drop, creating a gap between what is committed and what gets delivered. Hence there should be a window for a proper Pursuit Handover to the transition manager. What gets handed over to the transition manager is the set of expectations sold to the client.
Checklist usage and Tools Management: failing to establish a transition kit upfront can lead to scampering mid-way for proper checklists and tools. Using many tools yet struggling to deliver a proper, clean report is an observed outcome of poor planning and a casual approach to transition. This can become a nightmare, as without appropriate tools you cannot control the drive to the destination.

Net-net, planning and continuous monitoring are key to any transition, and a transition manager who is not entrenched in the details will create a difficult journey for himself and the team, with severe impact on the QoS!

Digital Workforce: Next Gen Engineers Asset

The IT industry's service providers are currently struggling with means and mechanisms to transform the existing workforce to adopt and adapt to digital skills. As they step deeper and deeper, the journey seems to be getting more difficult and complex. Lateral folks who have rested on their laurels for long are finding it difficult to put their arms around the new technology and software engineering changes demanded, as the industry as a whole seems to be suffering from inertia built over more than a decade.
Technological advances in the past 2-3 years have come at a phenomenal pace. Platforms, packages, the penetration of social media, mobile apps, the transformation to cloud, analytics being used as a primary R&D tool in almost all domains, and latest of all the IoT: all have brought compulsive factors into each industrial domain. It now looks like no industry will survive without embracing technology.

Many of the technologies/platforms that we hear of today in the IT industry never existed 8-10 years ago, like Raspberry Pi, Xively, Thingworx, Mahout, Apache Kafka, IBM Bluemix, Osmosis, etc. And to add further to the pace, what we see today may be just 40% of what we will see in the next 5 years!! Bright minds will be needed in every organization to drive the adoption and delivery of solutions using these technologies.

The next wave of engineers, who will graduate by 2017-18, hold the key. When I speak to them about today's transformations and new developments, they seem to understand most of the emerging areas, thinking like professionals who are ready to learn, execute and conquer the new technological frontiers beckoning them. Many, with the right support from their campuses, are ideating like never before. Many are taking on the mantle of entrepreneurship and donning a techno-commercial hat. They are able to talk like typical maverick innovative thinkers. Though many would think that's not what we want, I would contest that this is exactly what is needed now. If we cannot think out of the box, the conventional approach will spell disaster.

IT organizations (especially those in the service industry) are running aggressive internal transformation programs, some focused and some discretionary, but with attention and absorption being quite low, the grip on the handle is suspect. Hence the infusion of new blood, mixed with rejuvenated ready-to-learn experienced folks, will create the new organizations that will sustain the next five years, if not the decade.

The young engineering students graduating in 2017-18 will have a bigger challenge: closing the gaps between what was taught in the earlier part of the curriculum and what is being rolled out in the current curriculum. The following will come true in the next few years:

There will be unprecedented collaboration between industry and academia to create unique products at mass scale. Both will come together to create a more vibrant workforce to face the upcoming market competition and demands.
Project work and internships will assume more significance, as IoT areas require more hands-on work than a theoretical exercise allows. Industries will demand longer project/internship durations, extending from the current 3-4 months to 6-12 months. The top students will get paid heavily by Indian outfits.
More internal labs and incubation centers will find places alongside customer CoEs, co-created between service providers, academia, product vendors and customers. All will focus on creating innovative market disruptors, and hence may unleash fierce but healthy competition between internal lines of business. Perhaps a mini technology office within each delivery unit will be needed for the next 4-5 years.

With the above, more patents are expected, and IP creation will become a buzzword to swear by, ever more aggressively.

Cloud, mobility and analytics will no longer be niche areas, and every IT professional will have to understand a few of these areas to a decent level of depth. Hence each delivery unit will need architects in these areas embedded in its organization.

With this being the futuristic scenario, the existing workforce will have quite a bit to bite and chew. Organizations struggling to wriggle out of historical structures (especially ones where personality-based organization structures have been the trend) will need to be dismantled. Every organization will need to reincarnate itself, with a heavy focus on next-generation engineers playing a big role in the transformation. The quality of engineers will be the focus, and pay packets are slated to surge. Hence the intake may be limited to those who can walk the talk.

Software Development Methodologies

A software development methodology or system development methodology in software engineering is a framework that is used to structure, plan, and control the process of developing an information system.

The following methodologies are in common use:

    Agile Software Development
    Crystal Methods
    Dynamic Systems Development Model (DSDM)
    Extreme Programming (XP)
    Feature Driven Development (FDD)
    Joint Application Development (JAD)
    Lean Development (LD)
    Rapid Application Development (RAD)
    Rational Unified Process (RUP)
    Systems Development Life Cycle (SDLC)
    Waterfall (a.k.a. Traditional)

Agile Software Development Methodology

Agile software development is a conceptual framework for undertaking software engineering projects. There are a number of agile software development methodologies e.g. Crystal Methods, Dynamic Systems Development Model (DSDM), and Scrum.

Most agile methods attempt to minimize risk by developing software in short time boxes, called iterations, which typically last one to four weeks. Each iteration is like a miniature software project of its own, and includes all the tasks necessary to release the mini-increment of new functionality: planning, requirements analysis, design, coding, testing, and documentation. While a single iteration may not add enough functionality to warrant releasing the product, an agile software project intends to be capable of releasing new software at the end of every iteration. At the end of each iteration, the team reevaluates project priorities.

Agile methods emphasize real-time communication, preferably face-to-face, over written documents. Most agile teams are located in a bullpen and include all the people necessary to finish the software. At a minimum, this includes programmers and the people who define the product, such as product managers, business analysts, or actual customers. The bullpen may also include testers, interface designers, technical writers, and management.

Agile methods also emphasize working software as the primary measure of progress. Combined with the preference for face-to-face communication, agile methods produce very little written documentation relative to other methods.

How to remove malware from your Windows PC

Is your computer running slower than usual? Are you getting lots of pop-ups? Have you seen other weird problems crop up? If so, your PC might be infected with a virus, spyware, or other malware, even if you have an antivirus program installed. Though other problems, such as hardware issues, can produce similarly annoying symptoms, it's best to check for malware if your PC is acting up, and we'll show you how to do it yourself.

Step 1: Enter Safe Mode

Before you do anything, you need to disconnect your PC from the internet, and don’t use it until you’re ready to clean your PC. This can help prevent the malware from spreading and/or leaking your private data.

If you think your PC may have a malware infection, boot it into Microsoft's Safe Mode. In this mode, only the minimum required programs and services are loaded. If any malware is set to load automatically when Windows starts, entering this mode may prevent it from doing so. This is important because it allows the files to be removed more easily, since they're not actually running or active.

Sadly, Microsoft has turned the process of booting into Safe Mode from a relatively easy one in Windows 7 and Windows 8 into one that is decidedly more complicated in Windows 10. To boot into Windows Safe Mode, first click the Start button in Windows 10 and select the Power button as if you were going to reboot, but don't click anything. Next, hold down the Shift key and click Restart. When the full-screen menu appears, select Troubleshoot, then Advanced Options, then Startup Settings. On the next window, click the Restart button and wait for the next screen to appear (just stick with us here, we know this is long). Next you will see a menu with numbered startup options; select number 4, which is Safe Mode. Note that if you want to connect to any online scanners, you'll need to select option 5, which is Safe Mode with Networking.
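If you maintain several machines, the same Safe Mode flag can also be toggled from an elevated command prompt via the standard Windows `bcdedit` and `shutdown` tools. The sketch below is an illustrative Python wrapper, not part of the original article; the helper names are our own, and running the commands requires administrator rights on Windows:

```python
import subprocess

def safe_mode_commands(networking=False):
    """Build the Windows command lines that toggle Safe Mode on the current boot entry."""
    mode = "network" if networking else "minimal"  # option 5 vs option 4 in the boot menu
    return {
        # Boot into Safe Mode on the next restart
        "enable": ["bcdedit", "/set", "{current}", "safeboot", mode],
        # Remove the flag afterwards, or the PC will keep booting into Safe Mode
        "disable": ["bcdedit", "/deletevalue", "{current}", "safeboot"],
        # Restart immediately
        "reboot": ["shutdown", "/r", "/t", "0"],
    }

def run(cmd):
    """Execute one of the command lines (requires an elevated prompt on Windows)."""
    subprocess.run(cmd, check=True)
```

Remember to run the "disable" command once cleanup is finished, mirroring the note above about leaving Safe Mode.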

You may find that your PC runs noticeably faster in Safe Mode. This could be a sign that your system has a malware infection, or it could mean that you have a lot of legitimate programs that normally start up alongside Windows. If your PC is outfitted with a solid state drive it’s probably fast either way.

Step 2: Delete temporary files

You can use Windows 10's built-in Disk Cleanup utility to rid your system of unnecessary temp files.

Now that you're in Safe Mode, you'll want to run a virus scan. But before you do, delete your temporary files. Doing this may speed up the virus scanning, free up disk space, and even get rid of some malware. To use the Disk Cleanup utility included with Windows 10, type Disk Cleanup in the search bar or press the Start button, and select the tool named Disk Cleanup that appears.
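If you prefer to script this step, the sketch below (an illustrative Python example, not part of the original article) deletes files in the user's temp directory that are older than a given age, skipping anything still locked by a running process:

```python
import os
import time
import tempfile

def delete_old_temp_files(temp_dir=None, max_age_days=7):
    """Delete files in temp_dir older than max_age_days; return the paths removed."""
    temp_dir = temp_dir or tempfile.gettempdir()
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(temp_dir):
        path = os.path.join(temp_dir, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
        except OSError:
            pass  # file is locked or already gone; skip it and move on
    return removed
```

The try/except mirrors what Disk Cleanup does in practice: files held open by running programs are simply left in place rather than aborting the whole cleanup.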

Step 3: Download malware scanners

Now you’re ready to have a malware scanner do its work—and fortunately, running a scanner is enough to remove most standard infections. If you already had an antivirus program active on your computer, you should use a different scanner for this malware check, since your current antivirus software may not have detected the malware. Remember, no antivirus program can detect 100 percent of the millions of malware types and variants.

There are two types of antivirus programs. You’re probably more familiar with real-time antivirus programs, which run in the background and constantly watch for malware. Another option is an on-demand scanner, which searches for malware infections when you open the program manually and run a scan. You should have only one real-time antivirus program installed at a time, but you can have many on-demand scanners installed to run scans with multiple programs, thereby ensuring that if one program misses something a different one might find it.

If you think your PC is infected, we recommend using an on-demand scanner first and then following up with a full scan by your real-time antivirus program. Among the free (and high-quality) on-demand scanners available are BitDefender Free Edition, Kaspersky Virus Removal Tool, Malwarebytes, Microsoft’s Malicious Software Removal Tool, Avast, and SuperAntiSpyware.

Step 4: Run a scan with Malwarebytes

For illustrative purposes, we’ll describe how to use the Malwarebytes on-demand scanner. To get started, download it. If you disconnected from the internet for safety reasons when you first suspected that you might be infected, reconnect to it so you can download, install, and update Malwarebytes; then disconnect from the internet again before you start the actual scanning. If you can’t access the internet or you can’t download Malwarebytes on the infected computer, download it on another computer, save it to a USB flash drive, and take the flash drive to the infected computer.

After downloading Malwarebytes, run the setup file and follow the wizard to install the program. Once the program opens, keep the default scan option (“Threat Scan”) selected and click the Start Scan button. It should check for updates before it runs the scan, so just make sure that happens before you proceed.

Though it offers a custom-scan option, Malwarebytes recommends that you perform the threat scan first, as that scan usually finds all of the infections anyway. Depending on your computer, the quick scan can take anywhere from 5 to 20 minutes, whereas a custom scan might take 30 to 60 minutes or more. While Malwarebytes is scanning, you can see how many files or objects the software has already scanned, and how many of those files it has identified either as being malware or as being infected by malware.

If Malwarebytes automatically disappears after it begins scanning and won’t reopen, you probably have a rootkit or other deep infection that automatically kills scanners to prevent them from removing it. Though you can try some tricks to get around this malicious technique, you might be better off reinstalling Windows after backing up your files (as discussed later), in view of the time and effort you may have to expend to beat the malware.

Once the scan is complete, Malwarebytes will show you the results. If the software gives your system a clean bill of health but you still think that your system has acquired some malware, consider running a custom scan with Malwarebytes and trying the other scanners mentioned earlier. If Malwarebytes does find infections, it’ll show you what they are when the scan is complete. Click the Remove Selected button in the lower left to get rid of the specified infections. Malwarebytes may also prompt you to restart your PC in order to complete the removal process, which you should do.

If your problems persist after you’ve run the threat scan and it has found and removed unwanted files, consider running a full scan with Malwarebytes and the other scanners mentioned earlier. If the malware appears to be gone, run a full scan with your real-time antivirus program to confirm that result.
Step 5: Fix your web browser

Malware infections can damage Windows system files and other settings. One common malware trait is to modify your web browser’s homepage to reinfect the PC, display advertisements, prevent browsing, and generally annoy you.

Before launching your web browser, check your homepage and connection settings. For Internet Explorer, right-click the Windows 10 Start button and select Control Panel, then Internet Options. Find the Home Page settings in the General tab, and verify that the address listed isn’t some site you know nothing about. For Chrome, Firefox, or Edge, simply go to the settings window of your browser to check your homepage setting.
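If you’d rather inspect Internet Explorer’s homepage without opening the browser or Control Panel at all, you can read the value straight from the registry. The sketch below is illustrative, not a removal tool: it reads the real `Start Page` value under `HKCU\Software\Microsoft\Internet Explorer\Main` using Python’s standard `winreg` module, and simply returns `None` on systems where that module or key isn’t available.

```python
# Hedged sketch: read Internet Explorer's homepage from the registry so you
# can inspect it for hijacking without launching the browser.
def get_ie_homepage():
    try:
        import winreg  # Windows-only standard-library module
    except ImportError:
        return None  # not running on Windows
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                            r"Software\Microsoft\Internet Explorer\Main") as key:
            value, _ = winreg.QueryValueEx(key, "Start Page")
            return value
    except OSError:
        return None  # key or value missing on this system

if __name__ == "__main__":
    homepage = get_ie_homepage()
    print(homepage if homepage else "Homepage value not available on this system")
```

If the printed address is a site you don’t recognize, that’s a strong hint the browser was hijacked and needs its settings reset.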

[Screenshot: Internet Explorer Home Page settings]

Step 6: Recover your files if Windows is corrupt

If you can’t seem to remove the malware or if Windows isn’t working properly, you may have to reinstall Windows. But before wiping your hard drive, copy all of your files to an external USB hard drive or flash drive. If you check your email with a client program (such as Outlook or Windows Mail), make sure that you export your settings and messages to save them. You should also back up your device drivers with a utility such as Double Driver, in case you don’t have the driver discs anymore or don’t want to download them all again. Remember, you can’t save installed programs. Instead, you’ll have to reinstall the programs from discs or redownload them.

If Windows won’t start or work well enough to permit you to back up your files, you may create and use a Live CD, such as Hiren’s BootCD (HBCD), to access your files.

Once you have backed up everything, reinstall Windows, either from the disc that came with your PC, by downloading the installation image from Microsoft, or by using your PC’s factory restore option, if it has one. For a factory restore, you typically must press a certain key on the keyboard during the boot process for the restore procedure to initialize, and your PC should tell you which key to press in the first few seconds after you turn it on. If there are no on-screen instructions, consult your manual, the manufacturer, or Google.
Keeping your PC clean

Always make sure that you have a real-time antivirus program running on your PC, and make sure this program is always up-to-date. If you don’t want to spend money on yearly subscriptions, you can choose one of the many free programs that provide adequate protection, such as Avast, AVG, Panda, or Comodo. You can read more about how to find the best antivirus program for your needs right here.

In addition to installing traditional antivirus software, you might consider using the free OpenDNS service to help block dangerous sites. And if you frequent shady sites that might infect your PC with malware, consider running your web browser in sandbox mode to prevent any downloaded malware from harming your system. Some antivirus programs, such as Comodo, offer sandboxing features, or you can obtain them through a free third-party program such as Sandboxie.

When you think that you’ve rid your PC of malware infections, double-check your online accounts, including those for your bank, email, and social networking sites. Look for suspicious activity and change your passwords—because some malware can capture your passwords.
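When replacing passwords that may have been captured, it’s safer to generate new ones randomly than to invent them yourself. Here’s a small illustrative sketch using Python’s standard `secrets` module, which is designed for security-sensitive randomness; the length and character set chosen are arbitrary assumptions, so adjust them to each site’s password rules.

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password from letters, digits, and a few symbols.

    `secrets.choice` draws from the OS's cryptographically secure random
    source, unlike the `random` module, which is not suitable for passwords.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(make_password(20))
```

A password manager can do the same job and remember the results for you, which matters when every account needs a different password.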

If you have a backup system in place that automatically backs up your files or system, consider running virus scans on the backups to confirm that they didn’t inadvertently save infections. If virus scans aren’t feasible, as is the case with online backup systems, since virus scanners usually examine only drives attached to your PC (or just the C: drive), consider deleting your old backups and resetting the software to begin saving new backups that are hopefully free from infections.
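One common way scanners identify known malware in a set of files is by comparing file hashes against a database of known-bad signatures. The sketch below is purely illustrative of that idea, not a substitute for a real antivirus scan: it hashes every file in a backup folder with SHA-256 and flags any match against a hash set, which here is a hypothetical placeholder for a real threat-intelligence feed.

```python
import hashlib
from pathlib import Path

def find_flagged_files(backup_dir, known_bad_hashes):
    """Return backup files whose SHA-256 digest appears in `known_bad_hashes`.

    `known_bad_hashes` is a set of hex digest strings. In practice this list
    would come from an antivirus vendor or threat-intelligence source; the
    hashing mechanics are the point of this sketch, not the list itself.
    """
    flagged = []
    for path in Path(backup_dir).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in known_bad_hashes:
                flagged.append(path)
    return flagged
```

Real malware also mutates to evade exact-hash matching, which is why this approach only catches known, unmodified samples and a proper scanner is still preferable when you can attach the backup drive.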

Keep Windows, other Microsoft software, and Adobe products up-to-date. Make sure that you have Windows Update turned on and enabled to download and install updates automatically. If you’re not comfortable with this, set Windows to download the updates but let you choose when to install them.