PC Clinic Site is based in Blackheath, London SE3 7DT. Our Tech Center is open Monday to Saturday, 9.30am to 5.30pm. www.pcclinicsite.com
Friday, 19 December 2008
Security Update for Internet Explorer (960714)
Thursday, 18 December 2008
How to best use the "alt" tag
With the increase in bandwidth and the shift from dial-up to broadband, more and more people are putting more design into their sites to make them more attractive to the audience they hope to inspire.
We all understand that images are a great way to enhance a website from a user's viewpoint. However, it is important to note that search engine crawlers cannot really "see" images. So, if you have lots of images that contain textual content within the image itself, that content will not be seen by the crawlers. The alt attribute allows a web page to assign specific text as the "alternative" content for an image, for those that cannot view the image itself, such as search engine crawlers or text-only web browsers. We would advise you to use the "alt" attribute for every image on your webpage.
Tip: The best way to write alt text for an image is to describe the image rather than simply naming it. Since search engines cannot read images, giving each image a meaningful description goes a long way towards helping the search engine understand it.
For example: instead of simply naming your image "Cat", you can go a bit further by adding a short description, for instance "a cat with black and white fur".
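As a rough sketch (the file name and wording here are just placeholders), that advice looks like this in your HTML:

<!-- descriptive alt text rather than just alt="Cat" -->
<img src="cat.jpg" alt="A cat with black and white fur">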
Tip: Having too many images on a web page means the user has to wait longer for your page to load. We recommend having fewer or smaller images. Sometimes simply resizing your images will reduce their file size, and your web page will load faster.
If you find this post helpful, leave a comment.
Tag (metadata)
A tag is a non-hierarchical keyword or term assigned to a piece of information (such as an internet bookmark, digital image, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are chosen informally and personally by the item's creator or by its viewer, depending on the system.
Meta Title: summarises what the page is about (similar to a book or article title). When a search returns your site, the title is the piece of information displayed at the top of the browser. It is the default title your browser and bookmarking sites will automatically use when people decide to "save" your site for future use. It is also what search engines use to work out what the site is about. Compared to everything else on your page it gets the most "weight" from the search engines, so it should be the most important element on your page. When choosing your title we recommend a clear, easy-to-read title.
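As a minimal sketch (the wording is only a placeholder), the title sits in the head section of your page like this:

<head>
<title>PC Clinic Site - Computer Repairs in Blackheath, London</title>
</head>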
Keywords: This element provides a set of key terms or words that describe the web page. Early in the life of search engines the keywords element was very heavily weighted when determining the context of a web page. It was a quick way for the search engine to "determine" what a web page was about instead of having to scan all the content. But, as time went on, people started abusing keywords. Search optimizers stuffed the meta-keywords element with popular or highly searched terms that were unrelated to their web page's content in order to obtain higher rankings. As a result of this abuse, the importance of the meta-keywords element has been greatly weakened. Search engines no longer treat this information as the authoritative way to establish context.
Tip: We still recommend using the meta-keywords element, but make sure the keywords relate to your content, otherwise you may be penalised by search engines. From our research we also know that search engines don't weigh keywords as heavily as they used to, but used in the right manner they can still help with your page optimisation. We would advocate keeping the keywords to ten or fewer, placing the most important keyword first.
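A minimal sketch of the element (the keywords shown are only placeholders - yours should come from your own content), with the most important keyword first:

<meta name="keywords" content="computer repair, PC Clinic, Blackheath, London SE3, laptop repair">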
We would like to hear your comments if you find this post helpful.
Controlling how search engines access and index your website
I'm often asked about how Google and search engines work. One key question is: how does Google know what parts of a website the site owner wants to have show up in search results? Can publishers specify that some parts of the site should be private and non-searchable? The good news is that those who publish on the web have a lot of control over which pages should appear in search results.
The key is a simple file called robots.txt that has been an industry standard for many years. It lets a site owner control how search engines access their web site. With robots.txt you can control access at multiple levels -- the entire site, through individual directories, pages of a specific type, down to individual pages. Effective use of robots.txt gives you a lot of control over how your site is searched, but it's not always obvious how to achieve exactly what you want. This is the first of a series of posts on how to use robots.txt to control access to your content.
What does robots.txt do?
The web is big. Really big. You just won't believe how vastly hugely mind-bogglingly big it is. I mean, you might think it's a lot of work maintaining your website, but that's just peanuts to the whole web. (with profound apologies to Douglas Adams)
Search engines like Google read through all this information and create an index of it. The index allows a search engine to take a query from users and show all the pages on the web that match it.
In order to do this Google has a set of computers that continually crawl the web. They have a list of all the websites that Google knows about and read all the pages on each of those sites. Together these machines are known as the Googlebot. In general you want Googlebot to access your site so your web pages can be found by people searching on Google.
However, you may have a few pages on your site you don't want in Google's index. For example, you might have a directory that contains internal logs, or you may have news articles that require payment to access. You can exclude pages from Google's crawler by creating a text file called robots.txt and placing it in the root directory. The robots.txt file contains a list of the pages that search engines shouldn't access. Creating a robots.txt is straightforward and it allows you a sophisticated level of control over how search engines can access your web site.
Fine-grained control
In addition to the robots.txt file -- which allows you to concisely specify instructions for a large number of files on your web site -- you can use the robots META tag for fine-grain control over individual pages on your site. To implement this, simply add specific META tags to HTML pages to control how each individual page is indexed. Together, robots.txt and META tags give you the flexibility to express complex access policies relatively easily.
A simple example
Here is a simple example of a robots.txt file.
User-Agent: Googlebot
Disallow: /logs/
The User-Agent line specifies that the next section is a set of instructions just for the Googlebot. All the major search engines read and obey the instructions you put in robots.txt, and you can specify different rules for different search engines if you want to. The Disallow line tells Googlebot not to access files in the logs sub-directory of your site. The contents of the pages you put into the logs directory will not show up in Google search results.
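As an illustrative sketch (the directory names are placeholders), a robots.txt with different rules for different crawlers might look like this - the first section applies only to Googlebot, the second to every other crawler:

User-Agent: Googlebot
Disallow: /logs/

User-Agent: *
Disallow: /private/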
Preventing access to a file
If you have a news article on your site that is only accessible to registered users, you'll want it excluded from Google's results. To do this, simply add a META tag into the HTML file, so it starts something like the snippet sketched below.
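The exact snippet isn't reproduced here, but the standard robots META tag for this, placed in the page's head, is:

<html>
<head>
<meta name="robots" content="noindex">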
This stops Google from indexing this file. META tags are particularly useful if you have permission to edit the individual files but not the site-wide robots.txt. They also allow you to specify complex access-control policies on a page-by-page basis.
Posted by Dan Crow, Product Manager
Wednesday, 17 December 2008
Internet Explorer under FireFox
If the security issue that Internet Explorer 7 is facing at the moment is not proof enough to convince you that Internet Explorer 7 is very insecure, here are a few causes for concern from both a technical and a programming point of view:
Technical view point
Internet Explorer is an integral part of the Microsoft operating system. In fact, Internet Explorer happens to be a key element of the operating system core. The two marry up and you cannot have one without the other - we should think of them as husband and wife! If a bug attacks Internet Explorer, it could potentially crash your entire operating system. Worst case scenario: a clean install is your only option. If, on the other hand, that very same bug affected Firefox or any of the other third-party browsers, the cure is simply uninstalling and reinstalling it, since they are add-ons and not actually part of the operating system. The choice is yours! That is, if you can still find your product key or CD. After all, long gone are the days when your computer came with a CD - nowadays everything comes pre-installed, and if you are not technical and never made a recovery disk then we think your only option is off to the shop, or perhaps off to our PC Clinic.
Programming point of view
Most programmers would agree that Internet Explorer is a total nightmare. It would be beneficial if Microsoft took the time to look at some of the other browsers, review their code and form some agreement with them that would enable Microsoft to set a standard, so that coding would be much simpler and more straightforward. Maybe then this could assist programmers in creating more secure sites with fewer vulnerabilities for hackers to exploit.
Being in the web business means we have to constantly check Internet Explorer for bugs every time we write a piece of code. To top things off, Microsoft had to go and make Internet Explorer 6 and Internet Explorer 7 behave completely differently from each other, so now instead of concentrating on fixing bugs for just Internet Explorer 6 you also have to do the same for Internet Explorer 7 – e.g. when writing Cascading Style Sheets (CSS), you have to write one for Internet Explorer 6, one for Internet Explorer 7 and another for all other browsers – it would be useful if you could have one CSS file rather than having to write three!
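As an illustrative sketch (the stylesheet names are placeholders), the usual workaround is Internet Explorer's conditional comments, serving each version its own override stylesheet on top of the main one:

<link rel="stylesheet" type="text/css" href="style.css">
<!--[if IE 6]><link rel="stylesheet" type="text/css" href="style-ie6.css"><![endif]-->
<!--[if IE 7]><link rel="stylesheet" type="text/css" href="style-ie7.css"><![endif]-->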
VERDICT: Internet Explorer is extremely frustrating!!
Microsoft plans quick fix for Internet Explorer
The emergency patch should be available from 1800 GMT on 17 December, Microsoft has said.
The flaw in Microsoft's Internet Explorer browser could allow criminals to take control of people's computers and steal passwords.
Internet Explorer is used by the vast majority of computer users and the flaw could affect all versions of it.
So far the vulnerability has affected only machines running Internet Explorer 7.
''Microsoft teams worldwide have been working around the clock to develop a security update to help protect our customers," the software firm said in a statement.
''Until the update is available, Microsoft strongly encourages customers to follow the Protect Your Computer Guidance at www.microsoft.com/protect, which includes activating the Automatic Update setting in Windows to ensure that they receive the update as soon as it is available," the statement read.
Potential danger
It is relatively unusual for Microsoft to issue what it calls an "out-of-band" security bulletin and experts are reading the decision to rush out a patch as evidence of the potential danger of the flaw.
Some experts have suggested that users switch browsers until the flaw is fixed.
But Graham Cluley, senior consultant with security firm Sophos, said no browser is exempt from problems.
Tuesday, 16 December 2008
Part 2 - How to get a top ranking in Google
Why do most people find it difficult to get a top ranking in Google and other search engines?
In this second part we will show you what you can do to get your site on the first page in Google.
Create a date log table for Google
Creating a date log table for Google is really simple. Google may index or crawl some sites more often than others; it all depends on whether your site is static or dynamic. When building a site we would recommend building a dynamic one, for two simple reasons:
(a) It is easier to update
(b) Dynamic sites are more search engine friendly
TIP: We recommend a dynamic site with a CMS (content management system). There are a number of open source CMSs available on the web for download.
To create a date log table for Google, go to Google search and type in your web address. Locate your web address in the results and select 'Cached'. The cached page shows the date Google last crawled it; note this date in your log table each time you check and you will soon see how often Google crawls your site.
TIP: The date pattern might change, but by keeping a good date log table you will be on top of any modification Google makes in respect of crawling your site.
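As a rough illustration (the columns are only a suggestion - record whatever helps you spot the pattern), a date log table can be as simple as:

Date checked | Cached date shown by Google | Days since last crawl
dd/mm/yyyy | dd/mm/yyyy | n
dd/mm/yyyy | dd/mm/yyyy | n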
Specific Keyword, Title and Description
Google has recently changed the way it searches sites. Not too long ago Google relied on the meta keywords tag when displaying search results, but because of the misuse of keywords it has moved to searching on relevant content. Google now displays results based on content, and what we mean by that is that your description, title and keywords all have to marry up to make a good, relevant content page. Take a book, for example: the table of contents lists the topics in the book. Similarly, your keywords act like a 'table of contents'.
Next comes your title or topic, which gives a simple description of the body or content, followed by the description or body, which summarises the topic in question. This is the same method you should apply to every webpage.
TIP: When writing a content page you should always make your Keywords and Title part of your description.
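A minimal sketch of the three elements marrying up around one topic (the wording is only a placeholder):

<title>PC Safety Tips - Keep Your Computer Secure</title>
<meta name="description" content="Simple PC safety tips to help keep your computer secure.">
<meta name="keywords" content="PC safety tips, computer security, safe browsing">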
How you can use your date log table for Google to optimize your content.
In this case let’s use a search term instead of a web address. To demonstrate this we will use the words PC Safety Tip. Let’s begin by launching Google search.
Then type the words 'PC Safety Tip' into the Google search box and press enter. From the search results, locate your site and select 'Cached'. Google will display your search term on your page in bold, highlighted text; take note of this term and use your date log table to optimise for it.
In Part 3 of this blog we will show you a few more useful tips and how you can change your searches into cash!
Why not subscribe to our blog for loads more tips and insights?
Monday, 16 June 2008
Part 1 - How to get a top ranking in Google
Below are 5 useful tips to optimise your site:-
1. Optimise your site
Your site should be coded properly to maximize its ranking in search engines. A good way to measure this is to check whether your website is 'web standards compliant'. If it is, your site is also more likely to load faster and be more accessible to visually impaired users.
2. Update your website every day.
Search engines look on websites with fresh content very favorably. Your site should be powered by a simple 'content management system' so that you and your team can update your website quickly and easily.
3. Think long term
Create a 12 month plan to build up a large library of interesting articles and pages about the products and services you are trying to promote. Your aim should be to have hundreds of good quality targeted pages on your site.
4. Be everywhere
Get onto other websites that people use to get information. You could submit news articles to online magazines, add an entry to wikis such as Wikipedia, post a promotional video on video-sharing sites such as YouTube, or even set up an official company presence on social networking sites such as Facebook.
5. Get involved
Interact with other people in your online world. Post useful comments on forums and blogs related to the products and services your business offers. These comments will promote your website and also increase the visibility of your business to potential customers.
If you found this blog useful why not visit us at www.lacou.com where you will find a wealth of information.