Forum: Poser - OFFICIAL


Subject: The Great Poser Survey Results

ynsaen opened this issue on Feb 13, 2006 · 84 posts


ynsaen posted Thu, 16 February 2006 at 3:26 PM

That's actually not possible to do, Tunesy, in an open system. Think about it for a second: unless you live in a police state, there is always a voluntary aspect to a poll, because the individual must choose whether or not to participate. The real issue isn't "picks himself" -- it's how he ends up receiving the invitation to do so.

Most scientific polls apply an elimination algorithm to the final data gathered. What that means is they would take data like the set offered above, sort it according to predefined criteria drawn primarily from previous polls, and actually discard answers based on key demographic aspects (specifically gender, income, zip code, and certain variable preferences). The algorithm uses a random selection rule as well, and these days it is preprogrammed into the system for things such as phone surveys. Note the key points: the source of that algorithm, and the way it is applied -- after the collection process.

The typical survey you see as a pretty snapshot in USA Today with 500 respondents is actually pulled from a sampling of people known to be willing to answer surveys. Usually 2,300 to 3,100 are sampled, and then the algorithm is applied to reduce that number down. So, excepting the algorithm (which is an analysis-level tool, not a data-collection tool), the methodology for getting the data is the same: the option is put out there and individuals choose whether or not to participate.

To be sure the sample pool was wide enough, the survey was advertised across a great many sites and in several fields (the one I missed was education, which I need to find a way to reach in the next one). Language was a critical consideration as well: I had a Spanish translation, but wasn't able to get it up and presented in time, so I had to settle for the German and French versions, and I couldn't get a Japanese one done. The main push for the survey came through newsletters and front-page notices on the major Poser sites (which does skew things toward this overall community, but since it was intended as a community survey, the effort was worthwhile). That ensured the survey's reach went beyond the forums themselves and into the wider community that doesn't pay attention to the mad ramblings of us all.

For the community as a whole to be sampled randomly the way that link describes (and I know exactly what the point is -- one of my clients is a member of the NCPP, which purchased and expanded on my previous poll from a few years back), there would essentially need to be a single, combined database of all the various members of the community from which random samplings could be taken. That database would need to contain basic demographic information already, or the culling algorithm would have to be applied after the fact in order to reach the specified sampling rate. Building that database would require effort that simply isn't possible: no site I know of can or would be willing to share its user information to form it, and I wasn't able to create a dedicated site for that purpose (registering to be eligible to take the quiz, essentially). There is too much competition between the various sites to allow that level of cooperation on the scale involved here.

So let's look at the numbers. In effect, I approached 200,000 people in a worldwide parking lot and asked them to answer a survey. Of those 200,000, 18,000 or so took a look. Of that 18,000, 2,100 started the survey. Of that 2,100, 700 or so finished it. That's an awfully random sample. Statistically speaking, it's roughly as random as being able to sample all of the purchasers across all of the main sites on a random basis -- which is how retail market research is done these days, as it has proven valid time and time again and is growing in popularity (and, I should note, predominantly at companies outside the NCPP, as they find it more cost effective to move it in house).

Since any of the sites doing that would be most unlikely to make the information freely available (which was the purpose of doing this, and in part why the final results are so large a sample), that isn't an option. My own site -- Odd Ditty Foundry -- doesn't presently have a large enough registered population to be effective, and, to be honest, had I used it I wouldn't have shared the data.

Basically, I did indeed roll up my sleeves. My problem isn't a failure to understand how to conduct a survey; it's how to manage an online one of this scope with a limited toolset. One benefit of the experience is that I was able to acquire a much more advanced surveying tool that is more in line with my original goals, gives me greater flexibility and control over the process, and should reduce the difficulties involved here while retaining the integrity I've tried to operate with -- and serve better long term.

It has also shown me a great many of the reasons such a thing hasn't been done before, and why the community still suffers from a huge disconnect between perception and reality where the marketplace is concerned. As a community we need to be a little less cutthroat about some things, and a little more about others. Among the things that need to grow are the areas outside the "mainline" area where the "money" is -- particularly if we're going to grow and improve the overall reputation that Poser users have in the wider world. This survey is one way to do that.

I usually resist going into this level of detail about my methods and motivations because, well, it's boring, it's never easily summarized, and it means I make long-ass posts that rendo always seems to lock on. But I do consider all of those things. I planned this survey for over a year. The questions alone took two months. I began the follow-up in December, made several changes in the time since the survey started, and have made even more now that I have this improved software for it.

The arguments about the validity of the data are mostly off the mark -- they fail to take the full scope of it into account, or they work off aspects of analysis. I'm not supplying analysis (and I keep kicking myself for the observations I made in the first post), just the raw data. The rest is up to the individuals.
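If it helps to picture the culling step I described above, here's a rough sketch in Python of what that kind of post-collection elimination looks like in principle. Everything in it is made up for illustration -- the field names, the quota table, the completion rate -- and the real algorithms are weighted from prior polling data and are considerably more involved. The shape is the same, though: collect the volunteered responses first, then randomly discard within demographic cells until the retained sample matches the target mix.

```python
import random
from collections import defaultdict

# Hypothetical quota table: target count per (gender, income) cell, the sort
# of thing drawn from previous polls. Field names and numbers are invented.
TARGET_QUOTAS = {
    ("female", "under_30k"): 60,
    ("female", "30k_plus"):  90,
    ("male",   "under_30k"): 110,
    ("male",   "30k_plus"):  140,
}

def cull_responses(responses, quotas, seed=0):
    """Keep at most the quota of randomly chosen completed responses per cell."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for r in responses:
        if not r["completed"]:
            continue                      # partial responses are dropped outright
        cells[(r["gender"], r["income"])].append(r)
    kept = []
    for cell, members in cells.items():
        rng.shuffle(members)              # random selection within each cell
        kept.extend(members[:quotas.get(cell, 0)])
    return kept

if __name__ == "__main__":
    rng = random.Random(1)
    # Fake raw pool roughly shaped like the funnel above: 2,100 starts,
    # about a third of which were completed.
    raw = [
        {
            "gender": rng.choice(["female", "male"]),
            "income": rng.choice(["under_30k", "30k_plus"]),
            "completed": rng.random() < 0.33,
        }
        for _ in range(2100)
    ]
    sample = cull_responses(raw, TARGET_QUOTAS)
    print(f"{len(raw)} raw responses -> {len(sample)} retained after culling")
```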

thou and I, my friend, can, in the most flunkey world, make, each of us, one non-flunkey, one hero, if we like: that will be two heroes to begin with. (Carlyle)