A lot of coding goes into building a website, especially as you add more functions and features to your site. If your code is unorganized and messy, it can cause a variety of issues. Not only can it keep your website from functioning the way it's supposed to, it can also prevent search engines from properly indexing your site's content, hurting your search rankings. Some common website coding problems include:
Incorrect Robots.txt Files
Search engines like Google use bots to crawl the content on any given site and index it for ranking purposes. The robots.txt file, which implements the robots exclusion protocol, lets web crawlers and other bots know whether there are areas of your site that you do not want processed or scanned. Web crawlers check for a robots.txt file before they begin crawling the site. If your robots.txt file is written incorrectly, crawlers may not be able to interpret it, and your entire site may end up being crawled and indexed, including content you meant to exclude. Here are a few tips for getting your robots.txt file right:
- Robots.txt files must be placed in the top-level directory of your site in order to be found
- The file must be named in all lower case, as “robots.txt”
- Every subdomain must have its own robots.txt file
- Indicate the location of any sitemaps associated with your domain at the bottom of your robots.txt file
- Do not use robots.txt files to hide private user information, as robots.txt files are publicly available
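As a quick sanity check, you can parse your own robots.txt file the same way crawlers do. The sketch below uses Python's built-in urllib.robotparser; the domain and paths are placeholders for illustration, not part of any real site.

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain -- swap in your own site.
parser = RobotFileParser("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt file

# Check whether a well-behaved crawler (here, Googlebot) may fetch each URL.
for path in ["/", "/private/account.html", "/blog/post-1"]:
    allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
    print(f"{path}: {'allowed' if allowed else 'blocked'}")
```

If a path you intended to block comes back as "allowed" (or vice versa), that's a sign the file's rules aren't written the way you think they are.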
Lack Of A Sitemap File
A sitemap is a file that provides web crawlers with information about all of the pages, videos, and other files found on your website. Creating a sitemap provides search engines with a road map to your website that helps ensure that they index everything you want them to. Sitemaps can also provide information on what kind of content can be found on each page (such as images or videos), when your pages were last updated, how often your pages change, and if you have any alternate language versions of your pages.
Without a sitemap, web crawlers may miss some of your pages. This can happen if you have content pages that are isolated or not properly linked to one another. Newer sites may have fewer external links as well, which can make pages more difficult to discover. Basically, a sitemap will help ensure that the search engines get the information they need about your website in order to properly index it and rank it.
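If your platform doesn't generate a sitemap for you, a basic XML sitemap is straightforward to build. Here's a minimal sketch using Python's standard library; the URLs and dates are placeholders, and a real sitemap would list every page you want indexed.

```python
import xml.etree.ElementTree as ET

# Placeholder pages -- in practice, pull these from your CMS or route list.
pages = [
    {"loc": "https://www.example.com/", "lastmod": "2024-01-15"},
    {"loc": "https://www.example.com/about", "lastmod": "2024-01-10"},
]

# Build the <urlset> root using the standard sitemap namespace.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page["loc"]
    ET.SubElement(url, "lastmod").text = page["lastmod"]

# Write sitemap.xml to the site root so crawlers can find it.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```

Once the file is in place, you can point crawlers to it from your robots.txt file, as noted above.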
Extreme Use Of Subfolders In URL Strings
A visitor who explores deep into your website may end up on a page whose URL has far too many subfolders. This means the URL is particularly long and has numerous slashes throughout. In many cases it's unnecessarily complicated, and you should simplify the URL string. While a long URL full of subfolders won't necessarily hurt the performance of your site (nor will it hurt your page's ranking, according to Google), it does make your URL strings harder to edit. It can also be inconvenient for users who want to copy and paste your URL to share with others.
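As a rough illustration, you can gauge how deeply a URL nests by counting its path segments. The snippet below is a simple sketch using a made-up URL:

```python
from urllib.parse import urlparse

# A hypothetical, overly nested URL.
url = "https://www.example.com/shop/categories/clothing/mens/shirts/casual/item-42"

# Split the path into its subfolder segments and count them.
segments = [s for s in urlparse(url).path.split("/") if s]
print(segments)       # ['shop', 'categories', 'clothing', 'mens', 'shirts', 'casual', 'item-42']
print(len(segments))  # 7 -- a flatter URL such as /shirts/item-42 is easier to read and share
```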
Multiple 404 Errors and Redirects
404 errors occur when a user follows a broken link. A broken link means the user cannot reach the page you are linking to, whether it's an external link or an internal link, which makes for a frustrating website experience. A custom 404 page (often loosely called a 404 redirect) loads in place of the missing content to let users know the page is unavailable. There are many reasons a page may be unavailable -- it may no longer exist, it may have moved after an update, or the user may need to refine their search. It's important to set up a custom 404 page to let users know they are on the right site but that something was wrong with the link.
While a helpful 404 page is generally a good thing, too many 404 errors can affect not just the user experience, but also your search rankings. Fortunately, you can monitor 404 errors using Google Analytics. This means you can pinpoint broken links early on and fix them before they cause more issues for your users.
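Another lightweight option is to script a periodic check of the links you care about. The sketch below assumes the third-party requests library and a hypothetical list of URLs to verify:

```python
import requests

# Hypothetical list of internal and external links to verify.
links = [
    "https://www.example.com/about",
    "https://www.example.com/old-blog-post",
    "https://partner-site.example.org/resources",
]

for link in links:
    try:
        # HEAD is enough to read the status code without downloading the page.
        response = requests.head(link, allow_redirects=True, timeout=10)
        if response.status_code == 404:
            print(f"Broken link (404): {link}")
        else:
            print(f"OK ({response.status_code}): {link}")
    except requests.RequestException as exc:
        print(f"Could not reach {link}: {exc}")
```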
No HTTPS Found
When building a website, always use the HTTPS protocol rather than plain HTTP. This is especially true if you're requesting personal information from visitors, such as email addresses or credit card numbers. HTTPS is much more secure because it encrypts the data transmitted between a user's browser and your website, so that even if the traffic is intercepted, it cannot be read or used.
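A quick way to confirm that HTTPS is not only available but enforced is to request the plain-HTTP version of your site and see where it lands. This sketch uses the requests library with a placeholder domain:

```python
import requests

# Placeholder domain -- replace with your own.
response = requests.get("http://www.example.com/", allow_redirects=True, timeout=10)

# If HTTPS is enforced, the plain-HTTP request should end up on an https:// URL.
if response.url.startswith("https://"):
    print(f"HTTP redirects to HTTPS: {response.url}")
else:
    print(f"Warning: site is still being served over {response.url}")
```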
Secondly, when creating your URL, decide between using “www” and not using it. Most people recognize a website address by the “.com” and often don't even type “www” into the address bar anymore. However, a “www” prefix is still perfectly valid, and it helps distinguish your address from similar hostnames used for other services, such as FTP or mail.
Additionally, if your top-level domain is less recognizable or ambiguous, adding “www” helps remove any doubt that the URL is a web address. If you don't use “www,” you will need to point your root domain's DNS A record directly at the IP address of your web server, which leaves you less flexibility if you later need to move servers or distribute traffic for performance or availability. On top of that, cookies set on a bare domain are shared with all of your subdomains, whether those applications use the data or not. It's also worth noting that a “www” prefix helps certain applications, such as word processors and email clients, recognize plain text as a link.
In general, configure your web server so that visitors can reach your site whether they enter your domain with “www” or without it.
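To verify that both forms work, and to see which one your server treats as the final destination, you can request each variant and inspect the URL it settles on. A small sketch using the requests library, again with a placeholder domain:

```python
import requests

# Placeholder domain -- replace with your own.
variants = ["https://example.com/", "https://www.example.com/"]

for start in variants:
    try:
        response = requests.get(start, allow_redirects=True, timeout=10)
        # The final URL shows which host the server settles on after any redirects.
        print(f"{start} -> {response.url} ({response.status_code})")
    except requests.RequestException as exc:
        print(f"Could not reach {start}: {exc}")
```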