Community Support Forums — WordPress® ( Users Helping Users ) — 2011-12-24T14:00:01-05:00 http://www.primothemes.com/forums/feed.php?f=4&t=15853 2011-12-24T14:00:01-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58734#p58734 <![CDATA[Re: Error code: 400]]> Have a great holiday!

Statistics: Posted by Jason Caldwell — December 24th, 2011, 2:00 pm


]]>
2011-12-21T12:57:22-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58409#p58409 <![CDATA[Re: Error code: 400]]> I reinstalled the plugin and then added the keys, and it worked; it gave a message to wait 30 minutes for propagation.

Thank you for your help, Jason Caldwell.

Statistics: Posted by govpatel — December 21st, 2011, 12:57 pm


]]>
2011-12-21T12:37:03-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58408#p58408 <![CDATA[Re: Error code: 400]]> Statistics: Posted by govpatel — December 21st, 2011, 12:37 pm


]]>
2011-12-21T11:08:31-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58404#p58404 <![CDATA[Re: Error code: 400]]>
govpatel wrote:
How do I get s2Member v111216? I have Version 111220 in the WordPress admin panel already.
That's fine. Our versions match the date of release. So v111220 is higher than v111216. You're good there. I'll see what I can do to reproduce this specific error and update this thread asap.

Error code: 400. Error Message: Unable to delete existing Amazon® CloudFront Downloads Distro. Unable to delete existing Amazon® CloudFront Distro. Bad Request
In the meantime, you can log into your Amazon CloudFront Console and disable/delete your previous Distros manually, as a workaround. Once they're deleted, run s2Member's auto-configuration routines again.

Statistics: Posted by Jason Caldwell — December 21st, 2011, 11:08 am


]]>
2011-12-21T10:32:18-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58401#p58401 <![CDATA[Re: Error code: 400]]>
I managed to get s2Member v111216 and replaced Version 111220 to see if I could get it to work. I am still getting the same error, and it asks me to update to Version 111220.

Statistics: Posted by govpatel — December 21st, 2011, 10:32 am


]]>
2011-12-21T10:03:35-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58396#p58396 <![CDATA[Re: Error code: 400]]> Statistics: Posted by Jason Caldwell — December 21st, 2011, 10:03 am


]]>
2011-12-21T09:58:08-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=58394#p58394 <![CDATA[Re: Error code: 400]]> Unable to auto-configure Amazon® CloudFront Distributions.
Error code: 400. Error Message: Unable to delete existing Amazon® CloudFront Downloads Distro. Unable to delete existing Amazon® CloudFront Distro. Bad Request

I downloaded the file from the link that Jason Caldwell posted and replaced the s2Member plugin files in the plugins folder, and I am still getting the same error.

I have a Multisite setup, but I am only using the parent site ( the main website ) for now; there are no child sites set up.

I would appreciate your help.

Govindji Patel

Statistics: Posted by govpatel — December 21st, 2011, 9:58 am


]]>
2011-12-09T10:51:21-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56159#p56159 <![CDATA[Re: Error code: 400]]> Thanks for the confirmation Sam. You're very welcome.
~ Sorry it took us so long to come to this conclusion.

100% resolved in development copy

Statistics: Posted by Jason Caldwell — December 9th, 2011, 10:51 am


]]>
2011-12-09T10:33:15-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56158#p56158 <![CDATA[Re: Error code: 400]]>
"Amazon® CloudFront Distributions auto-configured successfully. Please allow 30 minutes for propagation"

Perfect :D

It created the CloudFront distributions ( streaming/downloading ), the Bucket Policy, and the crossdomain.xml file.

I tested it twice. It works!

Thank you so much for spending time on this. Awesome Job

Sam

Statistics: Posted by drbyte — December 9th, 2011, 10:33 am


]]>
2011-12-09T09:44:21-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56155#p56155 <![CDATA[Re: Error code: 400]]> Looking for confirmation please.

I believe this issue has finally been corrected in the development copy of the s2Member Framework.
You can grab the latest development copy here: http://downloads.wordpress.org/plugin/s2member.zip

Changelog in this regard ...

1. Propagation issues with Origin Access Identities ( resolved, accounted for by s2Member ).

2. The WordPress® function wp_remote_request() does NOT support the HTTP method "DELETE" when using the cURL transport layer. Your server MUST have a php.ini file with allow_url_fopen=on in order for this to work as expected inside WordPress. cURL is fine for everything except "DELETE" operations. s2Member auto-resolves this issue, so long as allow_url_fopen=on is set in your php.ini file.

See also: viewtopic.php?f=36&t=2636
See also: viewtopic.php?f=36&t=247

@TODO: Create ticket at WordPress.org regarding the absence of support for "DELETE" in the cURL transport layer of the WP_Http_Curl{} class. This is really a WordPress® issue.
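For anyone curious how a workaround like this can look, here is an illustrative sketch ( NOT s2Member's actual code; the endpoint URL and header value are placeholders ). Older WordPress versions expose transport-selection filters such as use_curl_transport, which can steer a single request onto the PHP-streams transport; that transport honors "DELETE", but it requires allow_url_fopen=on in php.ini:

```php
<?php
// Illustrative sketch only ( not s2Member's actual code ). Steer one "DELETE"
// request away from the cURL transport, which ignores the "DELETE" method,
// and onto the PHP-streams transport ( requires allow_url_fopen=on ).
add_filter("use_curl_transport", "__return_false");

$response = wp_remote_request(
    "https://cloudfront.amazonaws.com/2010-11-01/distribution/EXAMPLE", // placeholder endpoint.
    array("method" => "DELETE", "headers" => array("If-Match" => "ETAG-VALUE-HERE")) // placeholder ETag.
);

remove_filter("use_curl_transport", "__return_false"); // restore normal transport selection.

if (!is_wp_error($response) && (int)wp_remote_retrieve_response_code($response) === 204)
    { /* CloudFront accepted the deletion request. */ }
?>
```

This is WordPress-dependent, so treat it as a sketch of the technique rather than drop-in code.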

100% resolved in development copy

Statistics: Posted by Jason Caldwell — December 9th, 2011, 9:44 am


]]>
2011-12-09T01:53:08-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56119#p56119 <![CDATA[Re: Error code: 400]]> Statistics: Posted by Jason Caldwell — December 9th, 2011, 1:53 am


]]>
2011-12-09T01:33:20-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56113#p56113 <![CDATA[Re: Error code: 400]]> Statistics: Posted by drbyte — December 9th, 2011, 1:33 am


]]>
2011-12-09T01:21:44-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56110#p56110 <![CDATA[Re: Error code: 400]]> Thanks for the follow-up Sam.
I'll check with beta testers on this issue and post updates as details become available.

Statistics: Posted by Jason Caldwell — December 9th, 2011, 1:21 am


]]>
2011-12-09T01:16:50-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56109#p56109 <![CDATA[Re: Error code: 400]]>
1.jpg
2.jpg
3.jpg
4.jpg

That's all that is happening.


Sam

Statistics: Posted by drbyte — December 9th, 2011, 1:16 am


]]>
2011-12-09T00:51:34-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56102#p56102 <![CDATA[Re: Error code: 400]]> viewtopic.php?f=4&t=15853&p=56155#p56155

Jason Caldwell wrote:
Update: This patch has been revised on the advice of two beta testers.
If you downloaded the previous patch file and still had trouble, please update to this latest patch.

files-in.inc.php.zip

Statistics: Posted by Jason Caldwell — December 9th, 2011, 12:51 am


]]>
2011-12-09T00:48:51-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56100#p56100 <![CDATA[Re: Error code: 400]]>
I just tried that and it is still giving a 400 error. It creates everything else but the Bucket Policy.

I am waiting on CloudFront to finish creating the distribution; then I will try again.

Thank you

Sam

Statistics: Posted by drbyte — December 9th, 2011, 12:48 am


]]>
2011-12-09T00:44:36-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56098#p56098 <![CDATA[Re: Error code: 400]]> viewtopic.php?f=4&t=15853&p=56155#p56155

Update: This patch has been revised on the advice of two beta testers.
If you downloaded the previous patch file and still had trouble, please update to this latest patch.
files-in.inc.php.zip

Statistics: Posted by Jason Caldwell — December 9th, 2011, 12:44 am


]]>
2011-12-09T00:22:52-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56097#p56097 <![CDATA[Re: Error code: 400]]> viewtopic.php?f=4&t=15853&p=56155#p56155

Hi Sam. Thanks for the follow-up.

I think we've just discovered why this is happening. It has to do with the "Id" property in the Bucket Policy. s2Member uses an MD5 hash of "s2Member/CloudFront", which would be the same across multiple instances of s2Member. Thus, the Id field would be rejected with a 400 error code by Amazon.
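The collision can be demonstrated in a minimal sketch ( illustrative only; the site-URL salt below is hypothetical, not necessarily s2Member's actual fix ). A fixed input always produces the same MD5 Id, so every install submits an identical Policy Id; salting with something site-specific yields a unique Id per install:

```php
<?php
// Minimal sketch of the "Id" collision described above ( illustrative only ).
$static_id_a = md5("s2Member/CloudFront"); // install A.
$static_id_b = md5("s2Member/CloudFront"); // install B: identical Id, so Amazon rejects with a 400.

// Salting with something site-specific ( hypothetical fix ) yields unique Ids per install.
$salted_id_a = md5("http://site-a.example/" . "s2Member/CloudFront");
$salted_id_b = md5("http://site-b.example/" . "s2Member/CloudFront");
?>
```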

I'm attaching a patch file for you. Please unzip it and allow the attached file to overwrite your existing copy of: /s2member/includes/classes/files-in.inc.php. Please use this patch against an existing installation of s2Member v111206.

files-in.inc.php.zip

Statistics: Posted by Jason Caldwell — December 9th, 2011, 12:22 am


]]>
2011-12-08T12:35:17-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56067#p56067 <![CDATA[Re: Error code: 400]]>

Until we have this issue resolved, here are some possible solutions:


1. Use only ONE Bucket for each instance of s2Member ( problem solved ).


That's what I have been trying to do: one Bucket per s2Member instance. But the 400 error kept coming, and that's why I was forced to use the one-bucket method.

Everything seems OK, including the CloudFront Distributions that s2Member creates, but when it comes to creating the Bucket Policy, it fails. It even creates the grantee for that specific Bucket, but that's all.

amazon-01.jpg

I will use your second option if there is no way out of this.

I am still waiting on Amazon Support to figure out the huge amount of usage during the first 4 days. I have been monitoring the usage since then ( not editing posts or having the HTML5 tag as a second option ).

Usage between Dec 5th and Dec 8th: 611.198GB to 621.152GB.

Usage between Dec 1st and Dec 5th: 0 to 611.198GB.

Anyhow, sorry for taking up your time on this. I will keep trying and post any new findings here.

Thank you

Sam

Statistics: Posted by drbyte — December 8th, 2011, 12:35 pm


]]>
2011-12-08T07:04:22-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56056#p56056 <![CDATA[Re: Error code: 400]]> Thanks for reporting this important issue Sam.

From the 1st to the 4th, I got about 600GB of AWS data transfer out. All I was doing was changing my files over from Wowza to AWS. I was viewing the posts for a few seconds at a time to check if the movie was playing correctly, but not more than 5 seconds at a time.

But after taking this out of the code above

Code:
    /* Else, try an HTML5 video tag. */
    {type: "html5", provider: "video",
        config: {file: "<?php echo $mp4["url"]; ?>"}},

I'm not sure, but it sounds like something is preloading somewhere. You might check with JWPlayer to see if there are any known bugs in this regard. Otherwise, you said that you were moving files around? Is it possible that there are redirects involved somehow, causing files to be downloaded inadvertently?

I just took another look through s2Member's source code. s2Member never issues a file_get_contents(), or anything like that, on an Amazon®-hosted file. It will redirect visitors to Amazon®, based on a multitude of factors and your configuration; but it won't download the file and drive up bandwidth. Hmm, let me know if you have anything more on this topic. I'm curious about why this happened.

Statistics: Posted by Jason Caldwell — December 8th, 2011, 7:04 am


]]>
2011-12-08T06:55:29-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56055#p56055 <![CDATA[Re: Error code: 400]]> OK. What you said here...

I believe all sub-sites are copying the main site's S3 and CloudFront credentials. Meaning it's not able to recreate the Bucket Policy, because there is one present, and it's not able to change it, because it belongs to the parent site. I think this problem only exists if the Multisite is configured as a sub-directory install rather than a sub-domain install. Meaning http://www.sub.site.com vs. http://www.site.com/subsite

Not exactly. The underlying issue here is with s2Member's auto-configuration routines for the Amazon S3/CloudFront combo, which are designed to setup and configure various Amazon requirements between your Bucket and your Distributions. s2Member assumes that you're creating a new Amazon® S3 Bucket, for each instance of s2Member. So inside a Multisite Network installation, each instance of s2Member ( i.e. each Child Blog in the Network ) should be associated with a Bucket that is dedicated to serving protected files for that Child Blog ( i.e. for that instance of s2Member ).

s2Member creates a new Origin Access Identity for each set of Distributions that it configures ( one Origin Access Identity for each instance of s2Member ). It does this, because in order for your CloudFront Distributions to be connected to an Amazon S3 Bucket, s2Member has to update the Amazon S3 Bucket Policy with the Origin Access Identity that it created. Unfortunately, s2Member's auto-configuration routines are NOT yet capable of piecing together existing Bucket Policies in an attempt to preserve any existing permissions granted for other Distributions. It simply assumes that each instance of s2Member is going to run with a Bucket dedicated to that instance.
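The Bucket Policy update described above looks roughly like this ( a sketch reconstructed from the debug log posted later in this thread; the Bucket name and canonical user ID are placeholders, and the exact structure s2Member submits may differ ):

```php
<?php
// Roughly the Bucket Policy that a PUT to /?policy on the Bucket carries,
// granting the CloudFront Origin Access Identity read access. Placeholders throughout.
$policy = array(
    "Version"   => "2008-10-17",
    "Id"        => md5("s2Member/CloudFront"), // the Id field discussed in this thread.
    "Statement" => array(array(
        "Sid"       => "s2Member/CloudFront",
        "Effect"    => "Allow",
        "Principal" => array("CanonicalUser" => "OAI-CANONICAL-USER-ID"), // placeholder OAI.
        "Action"    => "s3:GetObject",
        "Resource"  => "arn:aws:s3:::example-bucket/*", // placeholder Bucket.
    )),
);
$json = json_encode($policy); // body of the Policy request.
?>
```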

I just took a quick look at the source code that handles this. It's not a quick fix by any standard. The Bucket ACLs, Policies, and the underlying configuration of s2Member's options are not designed to allow for this. At least, not through its auto-configuration routines.

Until we have this issue resolved, here are some possible solutions:


1. Use only ONE Bucket for each instance of s2Member ( problem solved ).

2. Or, if you have multiple Child Blogs on a Multisite Network, and you really need to use the same Bucket for all Child Blogs across the entire Network, you can start fresh on the Main Site of your Network ( i.e. usually Blog ID #1 ). Allow s2Member to run its auto-configuration routines for Amazon S3/CloudFront. Once everything is configured properly on the Main Site of your Network, create this directory and file:

/wp-content/mu-plugins/s2-site-options.php
( these are MUST USE plugins, see: http://codex.wordpress.org/Must_Use_Plugins )
Code:
<?php
add_filter("ws_plugin__s2member_options_before_checksum", "s2_site_options");

function s2_site_options (&$options = array ())
    {
        if (is_multisite () && is_array ($site_options = get_site_option ("ws_plugin__s2member_options")))
            foreach ($site_options as /* Use global Amazon® config. */ $key => $value)
                if (preg_match ("/^amazon_(?:s3|cf)_files_/", $key))
                    $options[$key] = $value;

        return /* Options by reference. */ $options;
    }
?>
s2-site-options.zip
With this file in place, there is no need to configure Amazon S3/CloudFront on any of your other Child Blogs in the same Network. All existing and/or future Child Blogs will essentially come pre-configured with your current configuration on the Main Site, with respect to Amazon S3/CloudFront. Some might see this as a great time-saver.

WARNING: checking the box in the s2Member UI panel, to re-configure your Amazon/CloudFront Distributions, on any other Child Blog in the Network ( or on any other remote installation of WordPress, for that matter ), will effectively destroy what you've accomplished. Don't do it. Auto-configure your Amazon S3/CloudFront Distributions on the Main Site of your Network only. All other Child Blogs in the Network will use that configuration, and should NOT be re-configured again.

If you do this by accident, go back to your Main Site and re-run s2Member's auto-configuration routines all over again. Child Blogs will inherit their configuration from the Main Site.

Statistics: Posted by Jason Caldwell — December 8th, 2011, 6:55 am


]]>
2011-12-08T04:53:15-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56048#p56048 <![CDATA[Re: Error code: 400]]> I'm reading through this now.

Statistics: Posted by Jason Caldwell — December 8th, 2011, 4:53 am


]]>
2011-12-07T18:16:03-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=56008#p56008 <![CDATA[Re: Error code: 400]]>
I think there is a bug in the Multisite installation of WordPress and s2Member. I can't pinpoint the problem, but here's what's happening.

I believe all sub-sites are copying the main site's S3 and CloudFront credentials. Meaning it's not able to recreate the Bucket Policy, because there is one present, and it's not able to change it, because it belongs to the parent site. I think this problem only exists if the Multisite is configured as a sub-directory install rather than a sub-domain install. Meaning http://www.sub.site.com vs. http://www.site.com/subsite

I did a few tests just to check my theory.

I entered all the AWS credentials from the parent site into the other sub-sites, without having it reconfigure and recreate the Bucket Policy and CloudFront Distributions.

It did not work, because the sub-site was not configured correctly and it was looking for a CloudFront streaming server that does not exist.

Having said that, if you manually enter CloudFront where it says:
Yes, I want s2Member to auto-configure using custom CNAMES that I'll setup
Amazon® CloudFront CNAME for Streaming Files ( optional ):
Enter the parent site's CloudFront distribution server name: xxxxxxx.cloudfront.net

Save, and make sure the top ( reconfigure ) option is NOT marked.

Now, create a post and use the JWPlayer along with CloudFront streaming. Use a file that is in the main Bucket created by the parent site... Voila! It works fine.

Can you please confirm this with WordPress Multisite set up as a sub-directory install?

Thank you

Statistics: Posted by drbyte — December 7th, 2011, 6:16 pm


]]>
2011-12-07T05:02:58-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=55956#p55956 <![CDATA[Re: Error code: 400]]>
Thank you for looking into this error

I upgraded s2Member, and it got me the log file.

Code:
array (
  'option_value' => '',
  'option' => 'pro_recaptcha_private_key',
  's3c' =>
  array (
    'bucket' => 'xxxxxxxxxxxxx',
    'access_key' => 'xxxxxxxxxxxxxx',
    'secret_key' => 'xxxxxxxxxxxxxxxxxxx',
  ),
  'cfc' =>
  array (
    'distros_s3_access_id' => '455e7e83exxxxxxxxxxxxxx5cc617baf7ddc08xxxxxxxxxxxxxxxxxxxxxxxxxxx9876xxxxxxxxf6802f0',
  ),
  's3_date' => 'Wed, 07 Dec 2011 09:17:47 GMT',
  's3_location' => '/?policy',
  's3_domain' => 'xxxxxxxxxxxxxxxxx.s3.amazonaws.com',
  's3_signature' => 'gv+cxxxxxvtG5Rxxxxxxxxxxxxxxxxxx=',
  's3_args' =>
  array (
    'method' => 'PUT',
    'body' => '{"Version":"2008-10-17","Id":"xxxxxxxbb1xxxxxxxxxx65b2","Statement":[{"Sid":"s2Member/CloudFront","Effect":"Allow","Principal":{"CanonicalUser":"455e7exxxxxxxxxxx17baf7xxxxxxxx7ed841c724a861c129xxxxxxxxxxx4f6xxxxxxxxxxx2f0"},"Action":"s3:GetObject","Resource":"arn:aws:s3:::xxxxxxxx/*"}]}',
    'headers' =>
    array (
      'Host' => 'xxxxxxxx.s3.amazonaws.com',
      'Content-Type' => 'application/json',
      'Date' => 'Wed, 07 Dec 2011 09:17:47 GMT',
      'Authorization' => 'AWS AxxxxxxxxxxQQ:gvxxxxxxxxxFpCSwxxxxxxxxxx90i8s=',
    ),
  ),
  's3_response' =>
  array (
    'code' => 200,
    'message' => 'OK',
    'headers' =>
    array (
      'x-amz-id-2' => 'sn2wClZ0xxxxxxxxxxxxxxCLxxxxxxxx99xxxxxxxxxxxxxxU6iXg',
      'x-amz-request-id' => 'FA8xxxxxxx56676xxxxxxxDAC',
      'date' => 'Wed, 07 Dec 2011 09:17:50 GMT',
      'content-length' => '0',
      'connection' => 'keep-alive',
      'server' => 'AmazonS3',
    ),
    'body' => '',
    'response' =>
    array (
      'headers' =>
      array (
        'x-amz-id-2' => 'sn2wClZ06ExxxxxxxxxxfHvMWa5xxxxxxxxxxLSK29xxxxxxxxxxU6iXg',
        'x-amz-request-id' => 'Fxxxxxxxx56676xxxxxxxxC',
        'date' => 'Wed, 07 Dec 2011 09:17:50 GMT',
        'content-length' => '0',
        'connection' => 'keep-alive',
        'server' => 'AmazonS3',
      ),
      'body' => '',
      'response' =>
      array (
        'code' => 200,
        'message' => 'OK',
      ),
      'cookies' =>
      array (
      ),
      'filename' => NULL,
    ),
  ),
  's3_owner_tag' =>
  array (
    0 => '<Owner><ID>80d89cf4xxxxxxxxxxxxxxe718xxx2e3exxxxxxxxxxxxb145c200</ID><DisplayName>xxxxxxxxx</DisplayName></Owner>',
    1 => '<ID>80d8xxxxxx7a748xxxxxxxxxx7xxxx3e2ecxxxxxxxxxxxx145c200</ID><DisplayName>xxxxxxxx</DisplayName>',
  ),
  's3_owner_id_tag' =>
  array (
    0 => '<ID>80d8xxxx4790c57a7xxxxxxxxxxxc7296xxxxxxxxxxx5c200</ID>',
    1 => '80dxxx4790c5xxxxxxxxxxx7xxxxx2e3e2xxxxxxxxxa2b145c200',
  ),
  's3_owner_display_name_tag' =>
  array (
    0 => '<DisplayName>xxxxxxxxxx</DisplayName>',
    1 => 'xxxxxxxxx',
  ),
  's3_owner' =>
  array (
    'access_id' => '80xxxxxxxxcf4790c5xxxxxxxxxxxxxxxxe2ec7xxxxxxxxxxx145c200',
    'display_name' => 'xxxxxxxxxxx',
  ),
  's3_acls_xml' => '<AccessControlPolicy><Owner><ID>80d89cf4xxxxxxxxxxxxxxxx71xxx12exxxxxxxxxx6a2b145c200</ID><DisplayName>xxxxxxx</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>80d8xxxxxxxxxxxxxe718712e3e2xxxxxxxxxxxx45cxxxx00</ID><DisplayName>xxxxxxxxx</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>455exxxxxxxxxxx7e46bcf4667axxxxxxxxxxxxxxxxee22177dexxxxxxxxxxxxxxxf6802f0</ID><DisplayName>s2Member/CloudFront</DisplayName></Grantee><Permission>READ</Permission></Grant></AccessControlList></AccessControlPolicy>',
  's3_policy_json' => '{"Version":"2008-10-17","Id":"7xxxxxxx56bxxxxxxxxxxxxxxxxxx6xxxxxx2","Statement":[{"Sid":"s2Member/CloudFront","Effect":"Allow","Principal":{"CanonicalUser":"455xxxxxxxxx667a5c36c617baf7xxxxxxxxxxxxxxx129ee2xxxxxxx2cxxxxxxxx02f0"},"Action":"s3:GetObject","Resource":"arn:aws:s3:::xxxxxxxxxxxxxx/*"}]}',
)


I won't be able to post the original file in here, Jason. It has all my info ( scary ).

Anyway, I copied the policy above into the Bucket Policy and it seems OK. It's still creating the CloudFront Distribution; once it's done, I will try and see if it's working.

Thank You

UPDATES: I am still getting this error

Unable to auto-configure Amazon® CloudFront Distributions.
Error code: 400. Error Message: Unable to update existing Amazon® S3 ACLs. Unable to update existing Amazon® S3 Bucket Policy. Bad Request

The CloudFront Distribution was created, but nothing else.

One other problem I am facing Jason

Using S3/CloudFront/JWPlayer/Streaming/HTML5 fallback - NO Download

Code:
<div id="jw-container"></div>
<script type="text/javascript" src="/jwplayer/jwplayer.js"></script>
<?php
$cfg = array ("file_download" => get_post_meta(get_the_ID(), "movie", true), "url_to_storage_source" => true, "count_against_user" => true); ?>

<?php if (($mp4 = s2member_file_download_url ($cfg, "get-streamer-array"))) { ?>

<script type="text/javascript">
    jwplayer("jw-container").setup({
        modes: /* JW Player. */
        [
            /* First try real-time streaming with the Flash player. */
            {type: "flash", provider: "rtmp", src: "/jwplayer/player.swf",
                config: {streamer: "<?php echo $mp4["streamer"]; ?>", file: "<?php echo $mp4["file"]; ?>"}},

            /* Else, try an HTML5 video tag. */
            {type: "html5", provider: "video",
                config: {file: "<?php echo $mp4["url"]; ?>"}},
        ],
        autostart: true,
        controlbar: "bottom",
        skin: "http://www.site.com/glow.zip",
        /* Set video dimensions. */ width: 480, height: 320
    });
</script>

<?php } /* Closes the if ( ) check opened above. */ ?>


From the 1st to the 4th, I got about 600GB of AWS data transfer out. All I was doing was changing my files over from Wowza to AWS. I was viewing the posts for a few seconds at a time to check if the movie was playing correctly, but not more than 5 seconds at a time.

But after taking this out of the code above

Code:
/* Else, try an HTML5 video tag. */
            {type: "html5", provider: "video",
                config: {file: "<?php echo $mp4["url"]; ?>"}},


The data out is barely moving

I contacted Amazon and they are looking into this now.

Here are some of the out log files

Code:
<OperationUsage>
        <ServiceName>AmazonS3</ServiceName>
        <OperationName>GetObject</OperationName>
        <UsageType>DataTransfer-Out-Bytes</UsageType>
        <Resource>xxxxxxxxxx</Resource>
        <StartTime>12/04/11 04:00:00</StartTime>
        <EndTime>12/04/11 05:00:00</EndTime>
        <UsageValue>19750475516</UsageValue>
    </OperationUsage>

That's 18.3940637074411GB, roughly 18GB of transfer.


Code:
<OperationUsage>
        <ServiceName>AmazonS3</ServiceName>
        <OperationName>GetObject</OperationName>
        <UsageType>DataTransfer-Out-Bytes</UsageType>
        <Resource>xxxxxxxxx</Resource>
        <StartTime>12/03/11 13:00:00</StartTime>
        <EndTime>12/03/11 14:00:00</EndTime>
        <UsageValue>57150542691</UsageValue>
    </OperationUsage>

That's 57150542691 bytes, roughly 53GB of transfer. Impossible!
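As a sanity check on those UsageValue figures ( a quick conversion sketch; Amazon reports DataTransfer-Out-Bytes in plain bytes ):

```php
<?php
// Convert Amazon's DataTransfer-Out-Bytes UsageValue figures into GB.
function bytes_to_gb ($bytes)
    {
        return $bytes / pow(1024, 3); // 1 GB = 1073741824 bytes.
    }

echo round(bytes_to_gb(19750475516), 2) . "GB\n"; // the 12/04 hour above: ~18.39GB.
echo round(bytes_to_gb(57150542691), 2) . "GB\n"; // the 12/03 hour above: ~53.23GB.
?>
```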


Last month was not even 20% of that. The only difference is that I was not updating my posts.
Total posts updated from the 1st to the 4th: I would say about 800 of them.

I did not have many user log-ins during those days either. Most of the post views were mine.

I got this from Amazon just now:

At this moment we are unable to ascertain the reason for the large volume of transfer on your S3 Bucket. We have requested the relevant team to investigate your matter, and we hope to get back to you as soon as possible.

Sam

Statistics: Posted by drbyte — December 7th, 2011, 5:02 am


]]>
2011-12-07T03:29:06-05:00 http://www.primothemes.com/forums/viewtopic.php?t=15853&p=55950#p55950 <![CDATA[Re: Error code: 400]]> viewtopic.php?f=4&t=15853&p=56155#p56155

Hi Sam. Sorry for the delayed response.
Been working on WordPress 3.3 issues, and the release of s2Member v111206.
http://wordpress.org/extend/plugins/s2member/changelog/

If this is still a problem for you, please unzip and upload the attached DEBUG file. Allow it to overwrite your existing copy of /s2member/includes/classes/files-in.inc.php ( please make a backup of the original file though ). Please do this against an existing installation of s2Member v111206.

Now, once this debugging file is in place, please run your tests again. Then check for the existence of: /wp-content/s2-s3-debug.log. Please post the log entries from that file so I can look for possible explanations.

Once your tests are completed, replace the original file, and get rid of the DEBUG file.
files-in.inc.php.zip

Statistics: Posted by Jason Caldwell — December 7th, 2011, 3:29 am


]]>