This is an archived forum post. The information may be outdated. Contact us if you have any questions.

510 - Timed out. Can't load the specified URL - ASP.NET

aptagen wrote on 2012-07-11:

I just bought the software to convert HTML to PDF. My objective is to create a "PDF catalog" of the products we offer here at Aptagen. The process is that the plugin goes to this page and makes a PDF of it. I have it set to 8.5" x 11" so that it breaks into a new page every time it reaches that dimension. However, I get the following error when I attempt to create the PDF:

510 - Timed out. Can't load the specified URL. List of the resources that can't be loaded:

The images above do load correctly: if you put them in a browser, they show up on screen.
What I'm essentially trying to do is convert the following website ( into a PDF. I don't know if it's just because of the size of the page (it's very long). However, I don't plan on running this process more than 3 times a month, so it would be fine, even if it takes up to a minute, for it to produce the PDF. This is our way of dynamically creating an online product catalog with the effects of CSS and HTML.

Any help would be great.
support wrote on 2012-07-11:
Hello, currently returns "404 Not Found". Please let us know once it is fixed and we will look into it.
aptagen wrote on 2012-07-11:
Yeah, I was hoping you wouldn't click on it immediately. Nice response time, though. I have fixed the links. It's supposed to say GeneratePDFPage.aspx.
I forgot the 'Page' word at the end. But I updated the links in my first post about 10 min. ago. Sorry for the confusion.
support wrote on 2012-07-12:
You are getting this error because the web page takes too long to load completely. The API aborts a request if it takes longer than 40 seconds - more about the API limits can be found at

The page loads more than 320 images with a total size of 23.1 MB - are you able to split it into smaller parts? If so, you could generate a PDF from each smaller part separately and then join them into a single PDF file using pdftk. Is this a viable option for you?
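For reference, the joining step is a one-liner; a sketch of the pdftk invocation, assuming pdftk is installed and the parts are saved as part1.pdf through part5.pdf in the current directory:

```shell
# concatenate the parts, in the given order, into a single catalog PDF
pdftk part1.pdf part2.pdf part3.pdf part4.pdf part5.pdf cat output catalog.pdf
```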
deepanshu1987 wrote on 2012-07-12:
We are also facing the same problem. A link that we were earlier able to convert to PDF in 5 to 10 seconds is now not getting downloaded:
510 - Timed out. Can't load the specified URL. List of the resources that can't be loaded
aptagen wrote on 2012-07-12:
Yes, splitting the page into subsections is viable as long as there is an efficient way to string them together into one PDF at the end. I will check out pdftk. However, I haven't worked with memory streams for a while. Any pointers on combining, say, 4 MemoryStream objects (this is what the convertHtmlToPdf() function returns)? Do you simply append them, or is it more involved than that?
aptagen wrote on 2012-07-12:
Actually, have you heard of iTextSharp? I already have the plugin for this, and it looks like it does a lot of the same things pdftk does. But how do I keep the 5-6 PDFs in memory (not write them to a folder individually) and retain the objects in memory to eventually create one large memory stream? And if you're familiar with iTextSharp, how would I go about transforming a MemoryStream into something usable in iTextSharp? I will test whether I can split them up first.
support wrote on 2012-07-13:
Just appending the streams will not work. I will put together a code example that shows how to join the individual streams with pdftk and post it here.
aptagen wrote on 2012-07-13:
OK, so I figured out a solution that works, though it's a little "choppy" since I haven't figured out how to keep all the objects in memory.
I had to separate the catalog into 5 individual PDF conversion requests and then save the results to a folder. (Using iTextSharp for the merge.)

Here's the code for that:

public class PDFCreator
{
    public PDFCreator() { }

    public static void ConvertHtmlToPDF(string uri)
    {
        System.Web.HttpResponse Response = System.Web.HttpContext.Current.Response;
        try
        {
            // create an API client instance
            pdfcrowd.Client client = new pdfcrowd.Client("UNAME", "API_CODE");

            // partNum is incremented after each conversion so every part gets a distinct file name
            int partNum = 1;
            FileStream StreamPart1 = new FileStream("../documents/catalog/part" + (partNum++) + ".pdf", FileMode.CreateNew);
            client.convertURI(uri + "?start_page=1", StreamPart1);
            StreamPart1.Close();
            FileStream StreamPart2 = new FileStream("../documents/catalog/part" + (partNum++) + ".pdf", FileMode.CreateNew);
            client.convertURI(uri + "?start_page=27", StreamPart2);
            StreamPart2.Close();
            FileStream StreamPart3 = new FileStream("../documents/catalog/part" + (partNum++) + ".pdf", FileMode.CreateNew);
            client.convertURI(uri + "?start_page=53", StreamPart3);
            StreamPart3.Close();
            FileStream StreamPart4 = new FileStream("../documents/catalog/part" + (partNum++) + ".pdf", FileMode.CreateNew);
            client.convertURI(uri + "?start_page=79", StreamPart4);
            StreamPart4.Close();
            FileStream StreamPart5 = new FileStream("../documents/catalog/part" + (partNum++) + ".pdf", FileMode.CreateNew);
            client.convertURI(uri + "?start_page=105", StreamPart5);
            StreamPart5.Close();
        }
        catch (pdfcrowd.Error why)
        {
            Response.Write(why.ToString());
        }
    }
}

So after executing the above function, I'm left with 5 PDFs named part1, part2, part3, etc.
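Since the five conversion calls differ only in the start page and the part number, they could also be collapsed into a loop; a minimal sketch, assuming the same pdfcrowd credentials and folder layout as above:

```csharp
using System.IO;

public class PDFCreatorLoop
{
    public static void ConvertHtmlToPDF(string uri)
    {
        pdfcrowd.Client client = new pdfcrowd.Client("UNAME", "API_CODE");

        // one conversion request per catalog section
        int[] startPages = { 1, 27, 53, 79, 105 };
        for (int partNum = 1; partNum <= startPages.Length; partNum++)
        {
            // each part gets a distinct file name: part1.pdf ... part5.pdf
            using (FileStream part = new FileStream(
                "../documents/catalog/part" + partNum + ".pdf", FileMode.CreateNew))
            {
                client.convertURI(uri + "?start_page=" + startPages[partNum - 1], part);
            }
        }
    }
}
```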

After this, I use the following code to merge them into one and add page numbers:

public class AptaCatalog
{
    public AptaCatalog() { }

    public static void CreateCatalogFromParts()
    {
        string destination_file = "../documents/catalog/catalog.pdf";
        string[] source_files = new string[5] { "../documents/catalog/part1.pdf",
                                                "../documents/catalog/part2.pdf",
                                                "../documents/catalog/part3.pdf",
                                                "../documents/catalog/part4.pdf",
                                                "../documents/catalog/part5.pdf" };

        MergeFiles(destination_file, source_files);
    }

    public static void MergeFiles(string destinationFile, string[] sourceFiles)
    {
        try
        {
            int f = 0;
            String outFile = destinationFile;
            Document document = null;
            PdfCopy writer = null;
            while (f < sourceFiles.Length)
            {
                // Create a reader for the current source document
                PdfReader reader = new PdfReader(sourceFiles[f]);

                // Retrieve the total number of pages
                int n = reader.NumberOfPages;
                //Trace.WriteLine("There are " + n + " pages in " + sourceFiles[f]);
                if (f == 0)
                {
                    // Step 1: Creation of a document-object
                    document = new Document(reader.GetPageSizeWithRotation(1));
                    // Step 2: Create a writer that listens to the document
                    writer = new PdfCopy(document, new FileStream(outFile, FileMode.Create));
                    // Step 3: Open the document
                    document.Open();
                }
                // Step 4: Add content (iTextSharp page numbers are 1-based)
                PdfImportedPage page;
                for (int i = 1; i <= n; i++)
                {
                    page = writer.GetImportedPage(reader, i);
                    writer.AddPage(page);
                }
                // Copy any form fields the part may contain
                PRAcroForm form = reader.AcroForm;
                if (form != null)
                {
                    writer.CopyAcroForm(reader);
                }
                f++;
            }
            // Step 5: Close the document
            document.Close();
        }
        catch (Exception)
        {
            //handle exception
        }
    }

    public static void AppendPageNumbers()
    {
        PdfReader reader1 = new PdfReader("../documents/catalog/catalog.pdf");
        PdfStamper stamper = new PdfStamper(reader1, new FileStream("../documents/catalog/Aptagen_AptamerCatalog_" + DateTime.Now.Year.ToString() + ".pdf", FileMode.Create));
        BaseFont font = BaseFont.CreateFont(BaseFont.TIMES_ROMAN, BaseFont.CP1252, false);
        for (int i = 0; i < reader1.NumberOfPages; ++i)
        {
            // Skip the cover page; GetOverContent is 1-based
            if (i != 0)
            {
                PdfContentByte overContent = stamper.GetOverContent(i + 1);
                overContent.BeginText();
                overContent.SetFontAndSize(font, 10.0f);
                overContent.SetTextMatrix(270, 15);
                overContent.ShowText("Page " + (i + 1) + " of " + reader1.NumberOfPages.ToString());
                overContent.EndText();
            }
        }
        stamper.Close();
    }
}

So after calling AptaCatalog.CreateCatalogFromParts() and AptaCatalog.AppendPageNumbers(), I'm left with a complete 112-page PDF document, super quality from your software's algorithm, with page numbers at the bottom of each page. This process generates an 8.5" x 11" document (816px x 1056px).
support wrote on 2012-07-15:
Great! I'm not familiar with iTextSharp, but it seems quite easy to work with. Btw, the solution with pdftk I had in mind would have had to use the filesystem for the individual PDF files as well.