
Renderosity Forums / Poser Python Scripting






Subject: Python Extension (.pyd) Call-Behavior Recommendations?


Spanki ( ) posted Mon, 24 March 2008 at 3:06 PM · edited Sat, 28 December 2024 at 2:34 PM

Hi Guys,

I've recently started working on a compiled extension module (_tdmt.pyd) to help speed up the tools that Cage and I have been working on in this thread (I linked to page 32, which is about where talk of the extension starts... the latest download of the extension is on page 36):

http://www.renderosity.com/mod/forumpro/showthread.php?thread_id=2677445&page=32

...anyway, my background is as a C/C++ developer, so I'm still relatively new to Python in general, and very much so to writing a C extension for Python :).  Which leads me to a few questions regarding how a Python programmer might 'expect' calls to the extension to behave.

There are really 2 areas in question (but I'll list the second one in a separate post)....

  1. argument validation / type-checking / range-checking / failure strategies

This topic/question deals with the way the extension has to retrieve arguments to function calls internally... normally - in C/C++ - the compiler itself can validate/verify that all of the arguments to a function are at least of the correct 'type' (int, long, float, a pointer to some specific structure, etc).  But of course, since Python is an interpreted language, everything is a 'Python object' and must be decoded by my C code before it can figure out / verify what type of data is being passed to it.

This is further complicated by Python lists of items, or lists of lists of items (and even testing the 'types' of elements of the items, and/or elements within elements... etc).  So let's take a relatively simple example from my extension...


PolyFaceNormals()

Syntax: PolyFaceNormals( polylist, vertexlist)
Return:  a list of VectorType face normals (one per polygon)

Unlike the Generate_TriPoly_Normals() method above, this one takes a regular polygon list (ie. as created by the psrPolygonList() method, above) and returns a list of normal vectors. 

Unlike previous implementations of this function, this one assumes that it's dealing with Ngons, so it averages the normals of the triangles that make up each Ngon to come up with the face normal. This should help account for non-planar polygons (to some extent).


...so note that polylist would be a list-of-lists-of-ints (one of my other routines returns data in this format) and the returned value is a list of new VectorType elements (the VectorType is also provided by my extension - basically a triplet of floats, with various support methods).

In the C code, my implementation of this routine first has to validate that the number and types of arguments are correct (a Python ListType, followed by another Python ListType).  But it then needs to delve further and look at what's stored in those lists (ListType and VectorType, respectively).  On the first argument, it still needs to decode further, to see what's in the lower-level lists (IntType).

In addition to the above type-checking, further tests are done to ensure that:

  • none of the lists are 'empty'
  • various other range-checks pass (polygon indices not < 0 or >= the number of vertices passed in, etc)
  • various memory allocations succeed (I have to allocate a new list of vectors to return)

...so this gets us (finally) to the meat of my question...

If any of the above tests 'fail', then I have a couple of choices:

  1. Set the Python error-string to reflect the failure condition and return NULL (causes an exception).

  2. Return an 'empty' list (which may end up getting passed to some other routine if the Python scripter doesn't check it first).

  3. Return 'None' instead of the expected list of VectorType(s).

...so far, in most cases, I've been doing option #1. I think that's probably the 'right' way to do it, though there may be cases where option #2 or #3 might be appropriate, if documented as such.
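For what it's worth, the same checks can be sketched in pure Python (the function name and error messages below are mine, not the real _tdmt API). With option #1, the C code sets the error string via PyErr_SetString() and returns NULL, which surfaces to the script as exactly these exceptions:

```python
def validate_poly_args(polylist, vertexlist):
    """Pure-Python sketch of the C-side checks for PolyFaceNormals()
    (illustrative only - the real checks happen inside the extension)."""
    # argument types: two lists
    if not isinstance(polylist, list) or not isinstance(vertexlist, list):
        raise TypeError("PolyFaceNormals expects (list, list)")
    # neither list may be empty
    if not polylist or not vertexlist:
        raise ValueError("polylist and vertexlist must be non-empty")
    for poly in polylist:
        # each polygon is itself a non-empty list of vertex indices
        if not isinstance(poly, list) or not poly:
            raise TypeError("each polygon must be a non-empty list of ints")
        for ndx in poly:
            if not isinstance(ndx, int):
                raise TypeError("vertex indices must be ints")
            # range check against the vertex list actually passed in
            if ndx < 0 or ndx >= len(vertexlist):
                raise IndexError("vertex index %d out of range" % ndx)
```

The payoff of raising (rather than returning an empty list or None) is that a bad call fails loudly at the call site, instead of propagating a bogus value into later routines.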

Any comments/suggestions?

Thanks,

  • Keith

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


svdl ( ) posted Mon, 24 March 2008 at 3:31 PM

As a programmer I prefer option #1 in just about every possible case.

The pen is mightier than the sword. But if you literally want to have some impact, use a typewriter

My gallery   My freestuff


nruddock ( ) posted Mon, 24 March 2008 at 4:01 PM

I suspect that looking at how modules like PIL and Numeric do things in their C code would help.

Some argument processing will possibly be better done in Python before calling into the C code.

Verifying the contents of lists of lists seems like overkill; it might be better to call a type-conversion method from within the C code, or even dispense with these checks but make sure that the contents are of the correct type when inserted (e.g. by using your own list wrapper class that does the checking and/or conversion at insertion).
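A minimal sketch of that checked-insertion wrapper idea (the class names are illustrative; `Vector` here is just a stand-in for the extension's VectorType):

```python
class Vector:
    """Stand-in for the extension's VectorType."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class VectorList(list):
    """List wrapper that type-checks at insertion, so code receiving a
    VectorList can trust the element types and skip per-element scans."""
    def append(self, item):
        if not isinstance(item, Vector):
            raise TypeError("VectorList only holds Vector instances")
        list.append(self, item)

    def __setitem__(self, ndx, item):
        if not isinstance(item, Vector):
            raise TypeError("VectorList only holds Vector instances")
        list.__setitem__(self, ndx, item)
```

A complete wrapper would also need to cover extend(), insert(), and slice assignment; this just shows the shape of the idea.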


Spanki ( ) posted Mon, 24 March 2008 at 4:22 PM

Part II - New Instance vs. Pointer / Reference

My second question has to do with how various data/objects get returned from my new 'type' implementations... for example, here are two new types implemented by the extension...

Type:    <VectorType>
Members:
  x  -  <FloatType>   ( aliases: vec.u, vec.r, vec.wgt0, vec[0] )
  y  -  <FloatType>   ( aliases: vec.v, vec.g, vec.wgt1, vec[1] )
  z  -  <FloatType>   ( aliases: vec.w, vec.b, vec.wgt2, vec[2] )

Type:    <TriPolyType>
Members:
  v0            - <IntType>    ( triangle vertex indices... )
  v1            - <IntType>
  v2            - <IntType>
  uv0           - <IntType>    ( triangle texture vertex indices... )
  uv1           - <IntType>
  uv2           - <IntType>
  polyIndex     - <IntType>    ( index of ngon that spawned the tripoly )
  triangleIndex - <IntType>    ( index of triangle within the above ngon )
  plane         - <FloatType>  ( pre-computed plane equation )
  normal        - <VectorType> ( face normal vector )

...Note that internally, my code doesn't store pointers to Python Objects for simple types like ints or floats, but for complex data types (like the 'normal' member of the TriPolyType), it does (incrementing and decrementing the reference counts as needed).

The next thing to note is that, as a 'type' implementation / extension, my C code is basically a 'handler' for any data of the new type, so the Python interpreter calls some routine in my code any time it needs to get info about or operate on my new type.  So let's look at some simple example code...

verts = [Vector(0.0, 0.0, 0.0) for i in range(3)]  # create 3 new vectors with not-very-useful positions
norm = Vector(0.0, 1.0, 0.0)   # pseudo normal vector, pointing up (maybe down :) )
tp = TriPoly(0, 1, 2)          # create a new tripoly, with 0/1/2 indices - we'll fill in the normal afterwards
tp.normal = norm

...ok, so most of the actual values used in the above pseudo-code are meaningless (all 3 vertices of the tripoly would be at 0.0, etc) - I just wanted some 'structure' to talk about :).  Given the above, if we do:

ndx0 = tp.v0

...then what gets returned from my handler is a 'new' Python Object, with the value set to whatever I had stored in the v0 member ('0', in this case).  However, if you do:

newnorm = tp.normal

...currently, what my code does is bump the reference count on the normal member (a pointer to a Python Object of VectorType - which my code also happens to implement, but it could be a list or some other type) and return the object as a pointer/reference to the one being stored in the internal tripoly structure.

It's only recently occurred to me that, while handy in some cases, it might also be the cause of some hard-to-track-down bugs in people's python scripts.  Consider the following...

newnorm.y = -1.0

...now, not only does the script's local 'newnorm' variable have its y axis set to -1.0, but the normal stored in "tp.normal" also got changed (and this is a pretty simplistic example... that tripoly might well be some nth index into a larger list of them, which might be part of an even larger mesh-type structure, etc).

My new class does provide a way to work around this particular issue, using its .clone() method...

newnorm = tp.normal.clone()

...that would end up with a new instance/copy of the normal, instead of a pointer to the existing one, but I'm fast closing in on the decision/opinion to just always return a new instance/copy of the normal, instead of a pointer/reference.
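The difference can be illustrated in pure Python with stand-in classes (these are mimics, not the extension itself): returning the stored object gives reference semantics, while .clone() gives an independent copy:

```python
class Vector:
    """Stand-in for the extension's VectorType."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z
    def clone(self):
        # independent copy, like the extension's .clone() method
        return Vector(self.x, self.y, self.z)

class TriPoly:
    """Stand-in for TriPolyType (only the relevant members)."""
    def __init__(self, v0=0, v1=0, v2=0):
        self.v0, self.v1, self.v2 = v0, v1, v2
        self.normal = Vector()

tp = TriPoly(0, 1, 2)
tp.normal = Vector(0.0, 1.0, 0.0)

newnorm = tp.normal          # reference: same object as tp.normal
newnorm.y = -1.0             # ...so this also changes tp.normal.y

safe = tp.normal.clone()     # copy: independent object
safe.y = 5.0                 # ...so tp.normal is untouched
```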

Thoughts / Comments?

Thanks,

Keith

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Mon, 24 March 2008 at 4:31 PM · edited Mon, 24 March 2008 at 4:36 PM

Thanks for the comments guys.  I had considered doing my own list-wrapper types early on, but decided in favor (for now, at least) of keeping the existing convenience/flexibility of Python lists (convenient for the python programmer, that is :) ).

That particular issue is not really so much one of speed (the compiled code doing those tests is light-speed compared to python code doing most anything), but more one of:

  1. tediousness - it's a pain in the ass :)
  2. lots of 'strings' ending up in my module, to handle the various error conditions.

...the first one is just grunt-work, so no big deal, and the second is not too big a deal either.  Both are more just "distasteful" for a (spoiled) C/C++ programmer to have to deal with :).

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


ockham ( ) posted Tue, 25 March 2008 at 2:47 PM

It's been a while since I did any of that SWIG stuff, but I remember running into so many undocumented obstacles that I just stayed with char * and int for a few direct arguments to C functions.  (I believe this was recommended by some of the SWIG guidance...?)

For more complicated sets of data, I passed them as text files in both directions.  That way the Py could write and read using its own methods, and the C could write and read in pure C style.

My python page
My ShareCG freebies


Spanki ( ) posted Tue, 25 March 2008 at 5:28 PM · edited Tue, 25 March 2008 at 5:29 PM

Yeah, I ended up taking the Cookbook Approach (chapter 4.1), so I don't use SWIG or distutils or any other helper, which makes things a bit more complex, but ultimately more transparent for the Python script side of things and more transparent to me as I learn how all this works :).

On my second question above, for example, last night I went through and re-wrote all my code to only ever return 'new' instances of Python Objects from class member access calls, instead of pointers/references to existing objects, in some cases...

I thought this might be the best approach, but now I'm not so sure and might go back to the old way.  For example, you used to be able to do this:

tp = TriPoly()       # create a new TriPoly object, with its members initialized to 0
tp.normal.x = 1.0    # alter the 'x' value of its 'normal' member

...but since the expression "tp.normal" now results in a copy instead of a pointer to the existing object being stored there, the "normal.x = 1.0" part of the expression ends up assigning 1.0 to the wrong place (to a temporary variable inside the python interpreter that is instantly deleted :), resulting in the entire statement being a useless non-event).

With my current code, you'd have to:

tmp_norm = tp.normal
tmp_norm.x = 1.0
tp.normal = tmp_norm

...to do the same thing.
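That lost-update pitfall can be mimicked in pure Python with a property that returns a copy on access (stand-in classes, assuming copy-on-access semantics like the rewritten extension - not the real _tdmt code):

```python
import copy

class Vector(object):
    """Stand-in for the extension's VectorType."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class TriPoly(object):
    """Stand-in TriPoly where .normal is copied on every access."""
    def __init__(self):
        self._normal = Vector()

    @property
    def normal(self):
        return copy.copy(self._normal)   # copy-on-access

    @normal.setter
    def normal(self, v):
        self._normal = copy.copy(v)

tp = TriPoly()
tp.normal.x = 1.0    # mutates the temporary copy - silently lost!

# the read-modify-write workaround from the post:
tmp_norm = tp.normal
tmp_norm.x = 1.0
tp.normal = tmp_norm # now the stored normal actually changes
```

The silent no-op on `tp.normal.x = 1.0` is exactly why this scheme felt wrong in practice: nothing fails, the assignment just vanishes.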

As mentioned, I think I'm going to revert to the original code and follow the behavior of the Python List type, to a large extent, which stores and returns pointers - but due to the way the interpreter works, simple ints and floats get interpreted as 'new' assignments.  For example:

>>> list = [1, 2.0, Vector()]
>>> list
[1, 2.0, Vector at <01294200> = (0, 0, 0)]
>>> someint = list[0]; somefloat = list[1]; somevec = list[2]
>>> someint
1
>>> somefloat
2.0
>>> somevec
Vector at <01294200> = (0, 0, 0)
>>> someint = 2
>>> somefloat = 3.0
>>> somevec.x = 99.0
>>> list
[1, 2.0, Vector at <01294200> = (99, 0, 0)]

...notice that modifications to 'someint' and 'somefloat' didn't change the values stored in the list, because the statements were interpreted as new assignments (new python variable creation/re-creation, rather than assigning the value to the object those variables pointed to).  However, changes to a member of 'somevec' did in fact alter what was in the list.

I think using that model (and an explanation in the docs) will work ok, noting that my new types will all have a '.Clone()' method to work around this issue (if that's what you want)...

>>> list
[1, 2.0, Vector at <01294200> = (99, 0, 0)]
>>> somevec = list[2].Clone()
>>> somevec
Vector at <01294F60> = (99, 0, 0)
>>> somevec.z = 33.0
>>> somevec
Vector at <01294F60> = (99, 0, 33)
>>> list
[1, 2.0, Vector at <01294200> = (99, 0, 0)]

...by cloning the value, you get a new instance instead of a pointer/reference to the existing one, so modifying it doesn't affect the original one in the list.

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.

