Computer CPU Knowledge

A computer CPU is, in simple terms, the “brain” of your computer. It is also known as the processor. CPUs process everything from basic to complex functions; every time something needs to be computed, it gets sent to the CPU. The CPU attaches directly to the motherboard using a socket and is usually cooled by a heat sink and fan. Socket types differ depending on the manufacturer of the processor.

Make sure that your CPU has the correct socket type for your motherboard. Not all CPUs have pins on the bottom, but be careful with ones that do: the pins can easily be bent while attaching the CPU to the motherboard. Processors have advanced drastically over the years, from the Pentium 4, to the Core 2 Duo, and now to quad-core processors.

There are several different manufacturers of CPUs, most notably Intel and AMD. Each manufacturer has many versions of its processors, differing in specifications. To identify one CPU from another, each version is given a core name. Taking Intel as an example, a couple of the different lines are the Core 2 Duo and the Core 2 Quad. Each CPU has a clock speed, which refers to the rate at which the CPU runs; this is the most commonly cited indicator of a CPU’s performance level. Another very important performance factor is the FSB (front-side bus), which determines the data transfer speed between the CPU and the RAM. A CPU also has an L2 cache. Level-2 cache is an area of fast memory inside the CPU used to store frequently accessed data so it does not have to be retrieved from slower RAM; generally, the larger the L2 cache, the faster your processor will be. The technological advancement of processors has made them more efficient in many ways.

How do You Register/Obtain DLL or OCX Files?

Most programs use some form of library files to hold common routines used by multiple parts of the program. These files typically have the extension .DLL or .OCX and are distributed with programs that need them. Rarely, one needs to be re-registered with Windows.

When a program installs a library (DLL or OCX) file the program’s install routine will typically “register” the file with the system. This process tells the system the libraries in the file are available for more than one program to use. (Some DLL or OCX files are self-registering.)

Sometimes, if multiple programs are using a DLL or OCX file the system does not know about all of them. In this case, if you uninstall one of the programs its uninstall routine may delete the library in question not knowing that another program needs it. When this is done the library’s registration with the system no longer applies. And, if you just copy the DLL or OCX file back where it came from the system may not recognize it even if it’s in the proper place. While this is rare, when this happens you may need to “register” the library file manually.

You can find the full details about how to use the REGSVR32.EXE file at this Microsoft link…

http://support.microsoft.com/?kbid=249873

The process is non-trivial, so you should study the referenced page quite closely if you are going to attempt to register a library file. Indeed, it just might be easier to reinstall the application in question and let its installer take care of the registration process as part of the install.

In summary, if you choose to manually register a library file you will have to open a command prompt (on some older systems this may mean restarting in command prompt, or DOS, mode). Once there, you will have to issue a command of the form…

Regsvr32 [/u] [/n] [/i[:cmdline]] dllname

It’s possible this procedure may fail in which case you may need information from the developer of the library.

I guess the bottom line recommendation from Computer Knowledge would be to avoid this process if at all possible.

Where Do You Get DLL Files?

As mentioned above, all of the library files you need should have been provided by the programs that require them. In the rare instance that you need one and can’t find it on the Website of the program in question you might try…

(Source: http://www.cknow.com/cms)

Strengthen your network defenses with these four steps

What are network defenses?
At first, the subject of network defenses might seem redundant or overly general. However, there’s nothing redundant or general about this area. Network defenses address the issues involved in connecting networks to each other and in operating a network as a whole. Network defenses don’t address things such as external firewalls or dial-up connections, since the perimeter security layer covers these. Nor do network defenses cover individual servers and workstations, since the host-defenses layer covers those. Instead, network defenses cover things like protocols and routers.

Internal firewalls
Just because the subject of network defenses doesn’t cover external firewalls, it doesn’t mean that it doesn’t cover firewalls at all. One of the first steps that I recommend taking toward securing your network defenses is to enable internal firewalls where possible. Internal firewalls are basically the same as external firewalls. The main difference is that their primary job is to protect the machine against traffic that is already on your network. There are a couple of reasons for implementing internal firewalls.

First, imagine for a moment that a hacker or a virus was able to manipulate your external firewall in a way that allowed all varieties of traffic to flow through it. Normally, this would mean that it was open season against your network. However, if you had enabled internal firewalls, the internal firewalls would block the malicious packets that the external firewall had let slip through.

The other main reason for enabling some internal firewalls is that many attacks tend to be internal in nature. At first, you might hear this statement and think that an internal attack couldn’t possibly happen on your network, but I’ve seen internal attacks and other security breaches in every company that I’ve ever worked for.

At two of the places that I used to work, people in other departments who were hacker or administrator wannabes thought that it would be cool to probe the network to see how much information they could acquire. In both cases, they had no ill intent (or so they said), they were just looking to impress their friends by hacking the system. Whatever their motivation, they did attempt to break through the network’s security. You’ve got to protect your network from people like this.

In other places that I’ve worked, I’ve seen people bring in unauthorized software that was infected with Trojan horses (remember “Back Orifice”?). These Trojan horses would then broadcast on specific ports. The perimeter firewall was powerless to stop these malicious packets because they were already inside the network.

This actually brings up an interesting point: Most of the techs I know configure their external firewalls to block all but a few inbound ports and to allow all outbound traffic. I recommend being just as picky with the outbound ports as you are with the inbound ports because you never know when a Trojan horse could be using some obscure port to broadcast information about your network to the world.
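The egress-filtering idea above can be sketched as a simple allowlist check. This is illustrative logic only, not any particular firewall product’s API, and the port lists are assumptions for the example:

```python
# Sketch of an egress (outbound) allowlist check. The ports listed here are
# examples, not a recommendation for any specific network.

ALLOWED_OUTBOUND = {25, 53, 80, 110, 443}  # SMTP, DNS, HTTP, POP3, HTTPS

def outbound_allowed(port: int) -> bool:
    """Return True only if the destination port is explicitly allowed."""
    return port in ALLOWED_OUTBOUND

print(outbound_allowed(443))    # normal web traffic passes
print(outbound_allowed(31337))  # an obscure Trojan port is blocked by default
```

The design choice is deny-by-default: a Trojan horse broadcasting on an obscure port is stopped unless that port was deliberately opened.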

Internal firewalls ideally should be placed on each PC and on each server. There are several good personal firewall products on the market, such as Norton Personal Firewall 2003 from Symantec. However, you may not have to spend a dime on an internal firewall for your workstations, as Windows XP contains its own built-in personal firewall.

To enable the Windows XP firewall, right-click on My Network Places and select the Properties command from the resulting shortcut menu to display the Network Connections window. Next, right-click on the network connection that you want to protect and select Properties. Now, select the Advanced tab and then click on the check box in the Internet Connection Firewall section. There’s also a Settings button that you can click to enable any ports that should remain open. Although the Windows XP firewall is intended as an Internet firewall, it works great as an internal firewall as well.

Encryption
The next step that I recommend taking is to encrypt your network traffic. Begin by implementing IPSec wherever possible. However, there are a few things that you need to know about implementing IPSec security.

When you configure a machine to use IPSec, you have the option of configuring IPSec to either request encryption or to require encryption. If you configure IPSec to require encryption, then any machine that the machine attempts to connect to will be informed that encryption is required. If the other machine is capable of IPSec encryption, then a secure channel will be established and the communications session will begin. If, on the other hand, the other machine is incapable of IPSec encryption, then the communications session will be denied because the required encryption can’t occur.

The request encryption option works a little differently. When a machine requests a connection, it also requests encryption. If both machines support IPSec encryption, then a secure channel is established and communications begin. If one of the machines doesn’t support IPSec encryption, then the communications session is established anyway, but the data simply isn’t encrypted.
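The difference between the two policies can be summarized as simple negotiation logic. This is a model of the behavior described above, not Windows’ actual IPSec implementation:

```python
def negotiate(policy: str, peer_supports_ipsec: bool) -> str:
    """Model the outcome of an IPSec policy negotiation.

    policy: the local machine's setting, 'require' or 'request'.
    Returns 'encrypted', 'plaintext', or 'denied'.
    """
    if peer_supports_ipsec:
        return "encrypted"   # both sides establish a secure channel
    if policy == "require":
        return "denied"      # required encryption can't occur, so no session
    return "plaintext"       # 'request' falls back to unencrypted traffic

print(negotiate("require", False))  # denied
print(negotiate("request", False))  # plaintext
```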

For this reason, there are a couple of things that I recommend doing. First, I recommend placing all of the servers within a site on a secure network. This network should be completely isolated from the normal network. Each server that users require access to should have two network cards, one for connecting to the main network and the other for connecting to the private server network. The server network should consist of only servers and should have a dedicated hub or switch.

By implementing such a configuration, you create a dedicated backbone between the servers. All server-based traffic, such as RPC traffic and traffic used for replication, can flow across this dedicated backbone. By doing so, you’ve helped to secure the server-based traffic and you’ve increased the amount of available bandwidth on the main network.

Next, I recommend implementing IPSec. For the server-only network, IPSec should be configured to require encryption. After all, this network consists of nothing but servers, so unless you’ve got UNIX, Linux, Macintosh, or some other non-Microsoft server, there’s no reason why all of your servers shouldn’t support IPSec. Therefore, you’re perfectly safe requiring encryption.

Now, for all of the workstations and the server connections on the primary network, you should configure the machines to request encryption. By doing so, you’ve achieved the optimal balance between security and functionality.

Unfortunately, IPSec can’t distinguish between network adapters on multihomed computers. Therefore, unless a server is attached exclusively to the server network, you’ll want to use the request encryption option or else clients may not be able to access the server.

Of course, IPSec isn’t the only type of encryption available for your network traffic. You must also consider how you’ll secure traffic that flows through your perimeter and the traffic flowing across your wireless networks.

Wireless encryption tends to be a touchy subject these days because wireless networking devices are still evolving. A lot of administrators view wireless networks as inherently insecure because network packets are flying through the air and anyone with a laptop and a wireless NIC can intercept those packets.

While there are certainly risks associated with wireless networks, in some ways, wireless networks are even more secure than wired networks. The reason is that the primary mechanism for encrypting wireless traffic is WEP encryption. WEP encryption ranges in strength from 40 bit on up to 152 bit or even higher. The actual strength depends on the lowest common denominator. For example, if your access point supports 128-bit WEP encryption, but one of your wireless clients only supports 64-bit WEP encryption, then you’ll be limited to using 64-bit encryption. These days, however, just about all wireless devices support at least 128-bit WEP encryption.
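The “lowest common denominator” rule is just a minimum across all devices on the wireless LAN. The device inventory here is made up for illustration:

```python
# Effective WEP strength is limited by the weakest device on the wireless LAN.
supported_wep_bits = {
    "access point": 128,
    "laptop A": 128,
    "laptop B": 64,   # one older client limits the whole network
}

effective_strength = min(supported_wep_bits.values())
print(f"Effective WEP encryption: {effective_strength}-bit")  # 64-bit
```

Replacing or upgrading the one weak client would immediately raise the whole network to 128-bit.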

What many administrators fail to realize is that just because wireless networks use WEP encryption, it isn’t the only encryption type that they can use. WEP encryption simply encrypts whatever traffic is flowing across the network. It doesn’t care what type of traffic it is encrypting. Therefore, if you are already encrypting data with IPSec, as you should be, then WEP will simply provide a second level of encryption to the already encrypted data.

Network isolation
If your company is very big, then there’s a good chance that you have a Web server that hosts the company’s Web site. If this Web server doesn’t require access to a backend database or to other resources on your private network, then there’s no reason to place it on your private network. Why run the risk of someone using a Web server as an entry point to your private network when you can fix the problem by isolating the server into its own network?

If your Web server does require access to a database or to some other resource on your private network, then I recommend placing an ISA Server between your firewall and the Web server. Internet users will communicate with the ISA Server rather than with the Web server directly. ISA Server will proxy requests between the users and the Web server. You may then establish an IPSec connection between the Web server and the database server and an SSL connection between the Web server and the ISA Server.

Packet sniffers
After you have taken the necessary steps to secure the traffic flowing across your network, I recommend occasionally using a packet sniffer to monitor network traffic. This is just a precautionary step because it allows you to see what types of traffic are actually present. If you detect unexpected packet types, you can see where those packets are coming from.

The biggest problem with protocol analyzers is that they can be used as a hacker tool. I used to think that it was impossible to detect someone that was using a packet sniffer on my network because of the nature of packet sniffing. Packet sniffers simply watch traffic flowing across the wire and report the contents of each packet. Since packet sniffers don’t transmit packets, how could you possibly detect them?

It’s actually easier than you might think to detect packet sniffing. All you need is a bait machine. The bait machine should be a workstation that no one knows exists except for you. Make sure that the bait machine has an IP address, but is not a part of a domain. Now, place the bait machine on the network and generate some packets. If someone is sniffing the network, the sniffer will pick up the packets that the bait machine produces. The problem is that the sniffer will know the machine’s IP address, but not its host name. Usually, the sniffer will do a DNS lookup to try to determine the machine’s host name. Since you are the only one who knows about the machine, no one should be doing DNS lookups on the machine. Therefore, if you check the DNS logs and see that someone has been doing DNS lookups on your bait machine, then there’s a good chance that the detected machine is sniffing the network.
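The log check at the end of this procedure can be sketched as a search of exported DNS query logs for any client that looked up the bait address. The log format and all addresses here are hypothetical, since real DNS log formats vary by server:

```python
# Flag any client that performed a lookup on the bait machine's address.
# Each (simplified, hypothetical) log line is: "<client_ip> PTR <queried_ip>"
BAIT_IP = "192.168.1.250"   # the bait machine nobody should know about

log_lines = [
    "192.168.1.10 PTR 192.168.1.5",
    "192.168.1.77 PTR 192.168.1.250",   # suspicious: lookup of the bait IP
    "192.168.1.10 PTR 192.168.1.20",
]

suspects = {line.split()[0] for line in log_lines
            if line.split()[2] == BAIT_IP}
print(suspects)  # the client(s) likely running a sniffer
```

Any client address that appears in `suspects` is a machine worth investigating, since only you know the bait machine exists.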

Another step that you can take toward preventing sniffing is to replace any existing hubs with VLAN switches. The idea is that these switches create virtual networks between the sender and the recipient of a packet. No longer does the packet flow to every machine on the network. Instead it flows directly to its destination. This means that it would be difficult for someone who might be sniffing the network to get anything useful.

These types of switches have another benefit as well. With a standard hub, all of the nodes fall into a single collision domain. This means that if you have 100 Mbps of total bandwidth, then the bandwidth is divided among all of the nodes. However, with a VLAN switch, each virtual LAN has a dedicated amount of bandwidth that it doesn’t have to share. That means that a 100 Mbps switch could potentially handle many hundreds of Mbps at a time, all on different virtual networks. Implementing VLAN switches will improve both security and efficiency.
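The bandwidth arithmetic behind that claim works out as follows, using idealized numbers (real throughput is lower due to overhead):

```python
total_mbps = 100
nodes = 20

# Shared hub: one collision domain, so bandwidth is divided among all nodes.
per_node_hub = total_mbps / nodes
print(f"Hub: ~{per_node_hub} Mbps per node")

# Switch: each communicating pair gets a dedicated 100 Mbps path, so
# aggregate capacity scales with the number of simultaneous conversations.
simultaneous_pairs = nodes // 2
aggregate_switch = simultaneous_pairs * total_mbps
print(f"Switch: up to {aggregate_switch} Mbps aggregate")
```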

10 tips for troubleshooting slowdowns in small business networks

Network congestion and slowdowns–whether caused by faulty hardware, negligent users, viruses or spyware applications gone wild, or other factors–lead to serious headaches for network administrators and support personnel. By keeping a wary eye tuned for the following 10 items, IT professionals can help prevent the most common causes of network slowdowns.

#1: Bad NICs
Intermittent network errors, particularly those isolated to a specific workstation or server, can often be traced to a failing network interface card. When you believe a network adapter may be failing, visually inspect the card’s LED link lights.

A solid green (or amber) LED indicates the NIC has a good active physical connection with another network device, such as a network switch or router (blinking LEDs typically indicate the NIC possesses an active connection and is processing network traffic). If the LED is not lit green, it’s likely the network adapter is disabled within Windows or doesn’t have an active connection to the network. It’s also possible the cable plugged into the NIC is connected to a nonfunctioning wall-jack or faulty network port.

If you can rule out nonfunctioning wall-jacks and faulty network ports (the easiest method of doing so is to connect the same network connection to a laptop known to have a properly functioning network adapter), and if the network adapter is properly enabled and configured in Windows, it’s likely the NIC is bad and requires replacement.
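Alongside the known-good-laptop test, a quick software-side check is simply attempting a TCP connection to a known service. This is a generic sketch (the host and port you test against are up to you):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: test reachability of the default gateway's web interface
# (replace with an address that exists on your network).
# can_connect("192.168.1.1", 80)
```

If this fails from the suspect workstation but succeeds from a known-good machine on the same jack, the NIC (or its driver/configuration) becomes the prime suspect.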

#2: Failing switches/routers
Many network slowdowns are foreshadowed by strange occurrences. For example, regular Web traffic may work properly, but e-mail may stop functioning. Or, regular Web traffic may work properly but attempts to connect to any secure (HTTPS) sites may fail. In other cases, Internet access simply ceases across the board.

Often the best remedy for inconsistent network outages and/or slowdowns is to reboot or power cycle the network’s routers/switches. If local network connectivity exists (if users can view and access network shares) but they are not receiving e-mail from external users or cannot access the Internet, rebooting or power cycling the WAN modem can often return the network to proper operation.

If you’re having to reboot or power cycle a piece of network equipment consistently, make sure that it’s connected to a quality uninterruptible power supply. Power fluctuations often result in confused switches and routers. If a network device is connected to a good UPS and still frequently experiences trouble, it may be necessary to replace the failing switch, router, or modem.

#3: Daisy chaining
As organizations grow, particularly small businesses, outside IT contractors often implement simple solutions. Many consultants choose to simply add a five-port router to an existing four-port router/firewall. Small businesses everywhere boast just such a setup.

However, as switches are added to a network, data packets must navigate additional hops to reach their destination. Each hop complicates network routing. Depending upon the amount of traffic a network must support–and even a small dentist’s or doctor’s office can easily stress 10/100 Mbps systems due to X-ray imagery, patient file information, and other data–the addition of an extra hop or two can spell the difference between a smooth running network and one that frequently slows employee productivity to unacceptable levels.

Resist the urge to daisy chain multiple network switches and routers. Instead, plan for capacity. Or if unforeseen growth has resulted in successive connected switches, eliminate as many devices as possible through consolidation to a more potent and scalable unit.

#4: NetBIOS conflicts
NetBIOS, still in use on many Windows NT 4.0 networks in particular, contains many built-in processes to catch and manage conflicts. Occasionally, however, those processes don’t handle conflicts properly. The result can be inaccessible file shares, increased network congestion, and even outages.

Guard against NetBIOS conflicts by ensuring older Windows systems all receive the most recent service packs. In some cases, Windows NT 4.0 systems having different service packs will generate telltale NetBT (ID 4320) errors.

Strange network behavior can also occur when two systems are given the same computer name or when two systems both believe they serve the master browser role. Sometimes the error will log itself as Event ID 8003 in a server’s system log. Disabling WINS/NetBT name resolution (only if it’s not required) can eliminate such issues.

If disabling NetBT is not an option, such errors can often be eliminated by identifying the second system that has the same computer name within the same domain and giving it a new name or by restarting the Netlogon Service on the domain controller. Yet another option for eliminating legacy NetBT issues is to search a system’s LMHOSTS file for inaccurate or outdated entries. Some IT professionals claim they’ve solved similar errors by disabling and re-enabling the NIC on the offending system.

#5: IP conflicts
Windows typically prevents two devices with the same IP address from logging on to the same network (when using DHCP). But occasionally, two systems with the same address wind up on the same network. For example, one system could receive an address automatically, while another computer logs on using a static address specified by a user. When such conflicts occur, network slowdowns result (and the systems sharing the same address frequently experience outages).

Troubleshoot IP address conflicts by ensuring you don’t have a rogue DHCP server on the network. Confirm, too, that configured DHCP scopes don’t contain overlapping or duplicate entries and that any systems (such as servers and routers) that have been assigned static IP addresses have been excluded from the DHCP pools.
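A quick way to audit for the overlap described above is to check each statically assigned address against the DHCP pool. The addresses below are made-up examples:

```python
import ipaddress

# Hypothetical DHCP pool and static assignments.
pool_start = ipaddress.ip_address("192.168.1.100")
pool_end = ipaddress.ip_address("192.168.1.200")
static_ips = ["192.168.1.10", "192.168.1.150", "192.168.1.201"]

def in_pool(addr: str) -> bool:
    """True if a static address falls inside the DHCP pool (a conflict risk)."""
    ip = ipaddress.ip_address(addr)
    return pool_start <= ip <= pool_end

conflicts = [a for a in static_ips if in_pool(a)]
print(conflicts)  # these addresses should be excluded from the DHCP scope
```

Any address the check flags should either be moved outside the pool or added to the scope’s exclusion list.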

#6: Excessive network-based applications
Occasionally, networks are overrun by the applications they power. For example, a physician’s office that uses a Web-based patient and practice application will commonly have every workstation logged on to the program during business hours. Retrieving data from the patient database and consistent monitoring of appointment and scheduling information alone can place stress on even a well-architected network.

Add in the fact that each workstation is likely tuned to e-mail (and many offices are turning to VoIP) and it’s easy to see how introducing a few streaming audio/video files to the mix (either in the form of online music services, news videos, or instructional medical presentations and Webinars) can unacceptably slow a 10/100 Mbps network’s performance.

Implement policies–and if necessary, hardware-based Web filtering tools–to prevent applications from overwhelming available network bandwidth. Make sure employees understand they’re not to stream unnecessary audio and video files. Further, when working with VoIP, be sure adequate data pipes are in place to manage both voice and data traffic.
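A rough capacity sanity check for the VoIP point can be done with simple arithmetic. The per-call figure below assumes roughly 100 Kbps for a G.711 call including packet overhead, which is an approximation; your codec and network will vary:

```python
KBPS_PER_CALL = 100       # rough G.711-with-overhead figure (assumption)
concurrent_calls = 12
voice_kbps = concurrent_calls * KBPS_PER_CALL

link_kbps = 1544          # a single T1 (1.544 Mbps)
data_budget_kbps = link_kbps - voice_kbps
print(f"Voice needs {voice_kbps} Kbps, leaving {data_budget_kbps} Kbps for data")
```

In this sketch a dozen simultaneous calls would consume most of a T1, which is exactly why voice and data pipes need to be sized together.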

#7: Spyware infestation
Spyware, the scourge of the last few years, finally appears to be meeting its match in business environments. The development of potent anti-spyware tools, combined with effective end user policies, is reducing the impact of spyware in many organizations. Windows Vista includes Defender, a decent anti-spyware application powered by the popular Giant engine.

However, infestations still occur, particularly on older systems that haven’t been properly safeguarded. Implement strong user policies and either gateway-based protection or individual client applications to prevent spyware programs from consuming precious network bandwidth.

#8: Virus infestation
Just as spyware is proving containable within business environments, so too are viruses. That said, despite an administrator’s best efforts–including firewall deployment, routine and consistent Windows patching, and the use of regularly updated antivirus programs–viruses do get through. The result can bring a network to a standstill.

For example, many viruses place Trojan programs on Windows systems, where they can wreak havoc. In addition to leveraging a system’s ability to send e-mail to forward hundreds (if not thousands) of spam messages an hour, viruses can corrupt network configuration.

Defend against virus threats to network performance by ensuring firewalls, Windows updates, and antivirus programs are properly configured and maintained.

#9: Insufficient bandwidth
Sometimes, a network just doesn’t have the throughput it requires. As with #6–excessive network-based applications–some environments demand more bandwidth than others.

When a network does bog down, several options typically exist for increasing capacity. Besides boosting up- and downstream speeds, some offices may require additional dedicated connections. From multiple T1s to DS3s to even optical carrier-grade connectivity, many potential solutions exist.

Further, some organizations may need to upgrade existing 10/100 Mbps networks to gigabit speeds. By upgrading NICs, cabling, and devices to 10/100/1000 Mbps equipment–and replacing any remaining hubs with switches–many firms can realize significant capacity gains. In other cases, it may be necessary to subnet networks to localize particularly intense traffic to specific network segments.

#10: DNS errors
DNS configuration errors can lead to numerous network failures and generalized slow performance. When no DNS server is available on a local LAN, local systems may have trouble finding one another or accessing local resources because they’ll have trouble finding service locator records that assist Windows systems in communicating with Active Directory. Worse, systems with no local DNS server or those workstations having DNS servers several hops away may experience delays or flat outages in accessing Web sites and extranets.

Try placing DNS servers as close to network systems as possible. Although adding DNS services to existing servers places greater demand on those boxes, properly configured machines can remain secure and noticeably enhance response times to external resources.

Also, always check to ensure systems are configured to use the proper DNS servers. Network architectures change over time, yet older workstations (particularly those set to use static addressing) occasionally are forgotten and continue operating using outdated DNS settings. As your organization and ISP update DNS systems, be sure workstations and other routing equipment actually receive the updates.
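That audit can be as simple as comparing each workstation’s configured DNS servers against the current, correct list. The hostnames and addresses below are hypothetical:

```python
# Current, correct DNS servers vs. what each workstation is configured with.
CURRENT_DNS = {"10.0.0.2", "10.0.0.3"}

workstations = {
    "ws-accounting": {"10.0.0.2", "10.0.0.3"},
    "ws-frontdesk": {"192.168.0.1"},   # stale static configuration
}

stale = [name for name, servers in workstations.items()
         if not servers <= CURRENT_DNS]
print(stale)  # workstations pointing at outdated DNS servers
```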

Types of Computer Memory

Computer RAM:
Computer RAM is the best-known form of memory your computer uses. Every file or application you open is placed in RAM. Any information the computer needs becomes part of a continuous cycle in which the CPU requests data from RAM, processes it, and then writes new data back to RAM; this can happen millions of times a second. However, RAM is only temporary storage, so unless the data is saved somewhere else, it is lost when the files or applications are closed.

Hard Drive:
A hard drive is a form of computer memory that allows you to permanently store data. This is where all of your permanent files and programs are stored. On computers running Microsoft Windows, the hard drive is often called the C: drive. The size of a hard drive is typically measured in gigabytes.

Virtual Memory:
Virtual memory typically comes into play when applications are too large for the RAM to handle. The operating system uses the hard drive to temporarily store information and retrieve it when needed. This is normally a lot slower than actual RAM and can degrade performance if used too heavily.

Cache Memory:
Cache memory sits between the CPU and the RAM and holds the most frequently used data or instructions to be processed. There are three levels of cache; some systems will have only level 1 and level 2, while more advanced systems also include level 3.