  1. LoadRunner Script Converter

    NeoLoad's LoadRunner script converter is now open source! Anyone can contribute to this project to make the path between LoadRunner and NeoLoad even more straightforward. Visit the script converter project page on GitHub.

    More extensive conversion coverage: the converter supports more functions and now covers SAP GUI scripts, enabling you to convert more of your LoadRunner scripts. Converting existing LoadRunner SAP GUI scripts to NeoLoad lets you shift quickly to Agile load testing.

    What's new in NeoLoad 6.7? To explore the new features, see What's New in NeoLoad 6.7:
    • Override scenario settings (load profile, test duration) with a YAML file when launching tests through the command line or with a CI plugin. See the '-project' option in the command-line documentation (a rough sketch follows below).
    • Swagger/OpenAPI import: easily create REST API calls from a Swagger/OpenAPI descriptor. See Create a User Path based on API requests.
    • AppDynamics integration on Neotys Labs to correlate data from AppDynamics with NeoLoad.
    • SAP GUI: remotely manage, on the same machine, several Load Generators running in several Terminal Server sessions. See the Terminal Services installation guide.

    Read More Here -> https://www.neotys.com/neoload/whats-new
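
    As a rough sketch of that command-line launch with a YAML override (the file paths and scenario name here are hypothetical, and the exact argument syntax and YAML keys are in the NeoLoad command-line documentation):

    Code:
    REM Pass the project plus a YAML file that overrides scenario settings
    NeoLoadCmd -project /path/to/project.nlp /path/to/override.yaml -launch "MyScenario" -noGUI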
  2. A recent study on retail conversions set out to answer the following questions:
    • What is the “magic number” for page load time that yields the highest conversion rate?
    • What is the impact of one second of performance improvement (or slowdown) on conversion rate/bounce rate/session length?
    • How are high-performance pages different – in terms of size, complexity, and resources – from pages that perform poorly?
    Some of the key findings from this study are documented below.

    Finding 1: If you’re only focusing on your site’s average or median conversion rates, you’re missing crucial insights



    Finding 2: Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types


    Finding 3: Even 100ms delays correlate to lower conversion rates


    Finding 4: A two-second delay increased bounce rates by up to 103%



    Finding 5: Start render time is an important metric

    Finding 6: A two-second delay correlated with up to a 51% decrease in session length


    You can read the full study document on the SOASTA website here.

    It's remarkable to see how such a small change in page responsiveness can bring a dramatic change in online business.

  3. Amazon's web hosting services are among the most widely used and popular, so an outage there impacts a huge number of websites and services. That was in fact observed yesterday, with Amazon reporting high error rates in one region of its S3 web service.


    Websites built with the site-creation service Wix were reported down, as were Trello, Quora, IFTTT, and Splitwise. Even Alexa struggled to stay online for a period of time.

    The funny part is that the website-status checker Isitdownrightnow.com went down as well, despite having no connection to the Amazon infrastructure. It turned out to be a ripple effect of the Amazon outage: the site was overloaded by people trying to check whether other websites were down!

    It gets me thinking about the importance of performance and scalability, and how we as performance engineers can get ahead of such incidents by designing rock-solid test cases and scalability considerations.
  4. HackerRank, a free coding-challenge website, has analyzed the data from its coding competitions and come up with some good insights into how programming quality varies across geographic areas.

    The data set was huge, comprising 1.5 million developers. The study ranked which countries were best overall, which types of challenges were most popular, which countries dominated each type of challenge, and which languages each country preferred.

    The results, which you might assume would be dominated by the US or India (which did have the most participants), did not quite come out that way.

  5. It's nice to use a tool in your local language, and that is very much possible with JMeter! Yes, you heard it right: JMeter is fully available in French and partly available in other languages as well.

    How to change the JMeter language?

    Edit jmeter.properties and uncomment the language property:

    Code:
    language=en
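
    For example, to switch the GUI to French (assuming a standard JMeter install), set the same property to:

    Code:
    language=fr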

    LINUX => Edit jmeter.sh:
    Add at start of file:

    Code:
    export JVM_ARGS="-Duser.language=en -Duser.region=EN"

    WINDOWS => Edit jmeter.bat:
    Add at start of file:

    Code:
    set JVM_ARGS="-Duser.language=en -Duser.region=EN"

  6. The 3rd annual CMG India conference is happening in Mumbai on 3rd and 4th Dec.

    The conference registration page is open, so don't wait till the last minute. Book your seat now.

    The entire conference program is also now available on the conference program site. It includes keynotes from technology leaders from the National Stock Exchange (NSE), State Bank of India (SBI), National Securities Depository Limited (NSDL), Aerospike, Intel, and IITB.

    This is a rare opportunity to listen to top technology leaders from the IT industry and academia and be part of in-depth technical sessions in areas such as big data performance, IoT, and large-scale systems architecture and design, so don't miss it. Conference registrations (both individual and group bookings) are available online through the conference registration site. Steps for group bookings are listed in the note below. For any queries, send an email to annual.conference@cmgindia.org.

    NOTE:

    Steps for CMG India Mumbai conference group bookings

    1) Go to www.cmgindia.org -> click the link in the left frame with the red title "CMG India 3rd Annual Conference"
    2) It will take you to the CMG Annual Conference page -> scroll down to reach the registration portal
    3) The registration portal is a separate microsite
    4) Browse to the Registration Form
    5) The person applying for the group discount should fill in the registration form
    6) You will then be taken to the PayUMoney site, where you will be presented with options to select your group discount
    7) Once payment is done, you will receive an email saying "Registration Successful"
    8) Forward that mail, with the names of the people covered by your bulk registration, to annual.conference@cmgindia.org without fail
  7. Grant Engelbrecht, application performance expert at Dynatrace, demonstrates how you can be a performance superhero by ensuring high performance and stellar quality for your application software with gap-free, deep application monitoring through the entire software lifecycle.

    The webinar includes a hands-on demonstration and shows how you can:
    • Monitor and optimize every single transaction with gap-free code-level data
    • Detect and diagnose problems in real time for fast resolution
    • Automatically see and analyze every user transaction, all the time
    • Integrate application monitoring with CI/CD solutions to automatically check performance in your build pipeline, before you ever get to production
    Plus, see what's new in the Application Monitoring and User Experience Management 6.5 update!

    Click here to watch the recording of the webinar.
  8. Task Description :: Retrieve emails from Gmail using LoadRunner

    Prerequisite: a working Gmail account and its password (with POP access enabled in the Gmail settings).

    We created a script using the POP3 protocol in LoadRunner and used the following code to retrieve the emails:

    Code:
        // Log on over secure POP3 (SSL on port 995); the Gmail user ID and
        // password are parameterized as {PUserID} and {PPassword}
        pop3_logon("Login", "URL=pop3s://{PUserID}:{PPassword}@pop.gmail.com:995", "STARTTLS", LAST);
    
        // Optionally, list all messages on the server and log the count:
        // int totalMessages;
        // totalMessages = pop3_list("POP3", LAST);
        // lr_log_message("There are %d messages.\r\n\r\n", totalMessages);
    
        // Retrieve the first four messages without deleting them from the server
        pop3_retrieve("POP3", "RetrieveList=1:2:3:4", "DeleteMail=false", "SaveTo=test", LAST);
    
        pop3_logoff();
    
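    Note that {PUserID} and {PPassword} are LoadRunner parameters, so the credentials can be data-driven per virtual user rather than hard-coded in the script.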
    Using the above code, we were able to successfully retrieve emails 1 through 4 from our Gmail account.
  9. Much hype surrounds SSD performance, and SSDs are considered the next generation of storage devices. The objective of our testing was to measure the performance of an SSD in a real-world environment against workloads like OLTP and DSS, and to compare it with traditional hard disk drives.

    Drive Specifications – SSD
    Cost – $9,000, Make – HP, Interface – SAS

    Drive Specifications – HDD
    Cost – $500, Make – HP, Interface – SAS



    SSD vs HDD – An Overview

    A solid-state drive (SSD) is a high-performance, plug-and-play data storage device that uses integrated circuit assemblies as memory to store data persistently. Unlike traditional disk drives, an SSD has no moving parts, which makes it more durable and shock resistant. Besides memory chips, an SSD has its own memory bus, a CPU, and a battery card. Thanks to the internal CPU, SSDs can manage their own data storage; hence they are a lot faster than conventional rotating hard disks and produce the highest possible I/O rates.

    Features of SSD
    Extremely low access times:
    SSDs have extremely low access times, as they don't require the storage medium to spin up for data access. Thus, all data on an SSD can be accessed almost instantaneously, without the delay of mechanical "seek" times.
    Durable and shock-resistant:
    SSDs have a non-mechanical design consisting of NAND flash chips, which makes them more durable and shock resistant than traditional hard disk drives.
    Faster:
    SSDs deliver much faster performance: near-instantaneous data access, quicker boot-ups, faster file transfers, and an overall snappier computing experience than hard drives. An HDD can access data quickly only when it sits close to the read/write heads, whereas all parts of an SSD can be accessed at once.
    Cooler and quieter:
    With no moving parts, SSDs run nearly silently and require very little power to operate, which translates into significantly less heat output by your system.


    Parameters Considered for Benchmarking

    Various parameters that should be considered when evaluating the performance of SSDs are listed here (a back-of-the-envelope example follows the list):
    1. # of Outstanding I/O (or Queue depth)
    2. Request Size
    3. Read/Write ratio
    4. Transfer ratio (Random/Sequential)
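
    To get a feel for how these parameters interact, here is a rough, idealized example (not a number from this study): by Little's law, sustainable IOPS ≈ queue depth / average service time per I/O. A drive that services a 4 KB random read in 0.5 ms at a queue depth of 32 could sustain about 32 / 0.0005 = 64,000 IOPS, while a disk needing 5 ms per random access tops out around 32 / 0.005 = 6,400 IOPS at the same queue depth.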

    Tool Used For Comparing Performance

    For evaluating disk I/O performance, the most widely used tool is Iometer. The main reason is that it is open source and readily available (you can download it from http://www.iometer.org). The other reason is that with Iometer almost any type of workload mix can be generated and run against the drives.

    Iometer can be used for measurement and characterization of:

    • Performance of disk and network controllers.
    • Bandwidth and latency capabilities of buses.
    • Network throughput to attached drives.
    • Shared bus performance.
    • System-level hard drive performance.
    • System-level network performance.


    Drive Specifications Used For Comparing Performance



    Load Profiles Used for simulating Loads

    Various load profiles were used to generate specific application workloads for comparing SSD and HDD performance. These profiles resemble the typical disk workloads of popular application types such as file copy, backup, database, OLTP, etc.



    Test Results - OLTP :


    We observed that the SSD clearly outperforms the HDD: the IOPS distribution shows the SSD standing out as the clear winner, delivering roughly 16 times the HDD's performance.



    Test Results - BACKUP Application:
    A backup workload is represented by large-block sequential reads. The SSD's advantage over the HDD is smaller here, but throughput is still almost double.


    Test Results - Restore Application:

    A restore workload is the inverse of backup: large-block sequential writes. Again the SSD's improvement over the HDD is modest compared with the random-I/O tests, but it is still almost twofold.


    Test Results - File Copy Application:

    File copying generally refers to creating a new file with the same content as an existing file; by definition, it performs a read on the existing file and a write into the newly created one. The SSD provides an almost 10x improvement over the HDD.

  10. To connect you to information in real time, it’s important for Twitter to be fast. That’s why we’ve been reviewing our entire technology stack to optimize for speed.

    When we shipped #NewTwitter in September 2010, we built it around a web application architecture that pushed all of the UI rendering and logic to JavaScript running on our users’ browsers and consumed the Twitter REST API directly, in a similar way to our mobile clients. That architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server.

    To improve the twitter.com experience for everyone, we’ve been working to take back control of our front-end performance by moving the rendering to the server. This has allowed us to drop our initial page load times to 1/5th of what they were previously and reduce differences in performance across browsers.

    On top of the rendered pages, we asynchronously bootstrap a new modular JavaScript application to provide the fully-featured interactive experience our users expect. This new framework will help us rapidly develop new Twitter features, take advantage of new browser technology, and ultimately provide the best experience to as many people as possible.

    This week, we rolled out the re-architected version of one of our most visited pages, the Tweet permalink page. We’ll continue to roll out this new framework to the rest of the site in the coming weeks, so we’d like to take you on a tour of some of the improvements.

    No more #!
    The first thing that you might notice is that permalink URLs are now simpler: they no longer use the hashbang (#!). While hashbang-style URLs have a handful of limitations, our primary reason for this change is to improve initial page-load performance.

    When you come to twitter.com, we want you to see content as soon as possible. With hashbang URLs, the browser needs to download an HTML page, download and execute some JavaScript, recognize the hashbang path (which is only visible to the browser), then fetch and render the content for that URL. By removing the need to handle routing on the client, we remove many of these steps and reduce the time it takes for you to find out what’s happening on twitter.com.
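
    (For example, with a hashbang URL such as twitter.com/#!/username/status/123, the server only ever receives a request for twitter.com/, because the fragment after the # is never sent to the server; client-side JavaScript has to read the fragment and then fetch the real content.)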

    Reducing time to first tweet
    Before starting any of this work we added instrumentation to find the performance pain points and identify which categories of users we could serve better. The most important metric we used was "time to first Tweet". This is a measurement, taken from a sample of users (using the Navigation Timing API), of the amount of time it takes from navigation (clicking the link) to viewing the first Tweet on each page's timeline. The metric gives us a good idea of how snappy the site feels.
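
    As an illustrative sketch only (not Twitter's actual instrumentation; the metrics endpoint is hypothetical), a "time to first Tweet"-style measurement can be taken in the browser with the Navigation Timing API:

    Code:
    // Approximate "time to first Tweet": elapsed time from the start of
    // navigation (Navigation Timing API) to the moment this function is
    // called, i.e. when the first Tweet is inserted into the DOM.
    function reportTimeToFirstTweet() {
        var elapsedMs = Date.now() - performance.timing.navigationStart;
        // Ship the sample to a (hypothetical) metrics endpoint as an image beacon.
        new Image().src = '/metrics?timeToFirstTweet=' + elapsedMs;
    }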

    Looking at the components that make up this measurement, we discovered that the raw parsing and execution of JavaScript caused massive outliers in perceived rendering speed. In our fully client-side architecture, you don’t see anything until our JavaScript is downloaded and executed. The problem is further exacerbated if you do not have a high-specification machine or if you’re running an older browser. The bottom line is that a client-side architecture leads to slower performance because most of the code is being executed on our users’ machines rather than our own.

    There are a variety of options for improving the performance of our JavaScript, but we wanted to do even better. We took the execution of JavaScript completely out of our render path. By rendering our page content on the server and deferring all JavaScript execution until well after that content has been rendered, we’ve dropped the time to first Tweet to one-fifth of what it was.

    Loading only what we need
    Now that we’re delivering page content faster, the next step is to ensure that our JavaScript is loaded and the application is interactive as soon as possible. To do that, we need to minimize the amount of JavaScript we use: smaller payload over the wire, fewer lines of code to parse, faster to execute. To make sure we only download the JavaScript necessary for the page to work, we needed to get a firm grip on our dependencies.

    To do this, we opted to arrange all our code as CommonJS modules, delivered via AMD. This means that each piece of our code explicitly declares what it needs in order to execute, which, firstly, is a win for developer productivity. When working on any one module, we can easily understand what dependencies it relies on, rather than the typical browser JavaScript situation in which code depends on an implicit load order and globally accessible properties.

    Modules let us separate the loading and the evaluation of our code. This means that we can bundle our code in the most efficient manner for delivery and leave the evaluation order up to the dependency loader. We can tune how we bundle our code, lazily load parts of it, download pieces in parallel, separate it into any number of files, and more — all without the author of the code having to know or care about this. Our JavaScript bundles are built programmatically by a tool, similar to the RequireJS optimizer, that crawls each file to build a dependency tree. This dependency tree lets us design how we bundle our code, and rather than downloading the kitchen sink every time, we only download the code we need — and then only execute that code when required by the application.
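
    For readers unfamiliar with the pattern, here is a generic sketch of a CommonJS-style module delivered via AMD (the module names are made up for illustration, not taken from Twitter's codebase):

    Code:
    // AMD's "simplified CommonJS wrapper": each module declares its
    // dependencies explicitly with require(), which lets a build tool
    // crawl the files and construct the dependency tree.
    define(function (require, exports, module) {
        var renderer = require('./tweet-renderer'); // hypothetical dependency

        exports.renderTimeline = function (tweets) {
            return tweets.map(function (tweet) {
                return renderer.render(tweet);
            });
        };
    });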

    What’s next?
    We’re currently rolling out this new architecture across the site. Once our pages are running on this new foundation, we will do more to further improve performance. For example, we will implement the History API to allow partial page reloads in browsers that support it, and begin to overhaul the server side of the application.

    If you want to know more about these changes, come and see us at the Fluent Conference next week. We’ll speak about the details behind our rebuild of twitter.com and host a JavaScript Happy Hour at Twitter HQ on May 31.